WO2015123792A1 - Image editing techniques for a device - Google Patents
- Publication number
- WO2015123792A1 (PCT application no. PCT/CN2014/000172)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- user input
- layer
- mobile device
- image layer
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/28—Indexing scheme for image data processing or generation, in general involving image processing hardware
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Definitions
- the present disclosure is generally related to image editing for a device.
- portable computing devices include wireless telephones, personal digital assistants (PDAs), and paging devices.
- a wireless device may be small, lightweight, and easily carried by users.
- Wireless telephones such as cellular telephones and Internet Protocol (IP) telephones, can communicate voice and data packets over wireless networks.
- wireless telephones can process executable instructions, including software applications, such as a web browser application, that can be used to access the Internet.
- many wireless telephones include other types of devices that are incorporated therein.
- a wireless telephone can also include a digital still camera, a digital video camera, a digital recorder, and an audio file player.
- wireless telephones and other mobile devices can include significant computing capabilities.
- a mobile device may include a camera and an image editing application usable to alter (or "edit") images captured with the camera.
- a user of the mobile device may capture an image using the camera and then alter the image using the image editing application, such as prior to sharing the image with friends or family.
- Certain image editing applications may enable a user to perform computationally simple operations, such as removing (or "cropping") a portion of an image. More advanced image editing applications may enable a user to perform more computationally intensive operations on a mobile device, but these operations may still not provide the user sufficient control over image editing operations to achieve certain image editing effects, potentially frustrating the user of the mobile device. Advanced image editing applications may also utilize complicated user input techniques that users may find difficult or cumbersome to use.
III. Summary
- a processor may receive image data corresponding to an image.
- a mobile device may include the processor and a camera, and the camera may capture the image.
- the processor may segment the image data (e.g., using a segmentation technique) into a first image layer and a second image layer.
- the first image layer may correspond to a foreground of the image and the second image layer may correspond to a background of the image, as illustrative examples.
- the first image layer and the second image layer may each correspond to foreground portions of the image (or background portions of the image).
- the first image layer and the second image layer may be independently edited by a user to create one or more visual effects.
- a user may perform an image editing operation on the first image layer but not the second image layer (or vice versa).
- the user may utilize an image editing application to perform the image editing operation, which may be executed by the processor.
- the image editing operation may include changing a color attribute of the first image layer but not the second image layer (e.g., changing a color of an object independently of a color of another object).
- the image editing operation may include blurring the first image layer but not the second image layer, such as by "blurring” the background but not the foreground to approximate a "super focus” camera effect of a camera that uses a large aperture to capture an image in which a foreground is in focus and in which the foreground is sharper than a background.
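The "super focus" effect described above can be sketched in a few lines; the following is a minimal NumPy illustration, not the patent's implementation. A simple box blur stands in for whatever blur kernel an editor would use, and the function name and mask convention are assumptions for this example:

```python
import numpy as np

def super_focus(image, foreground_mask, radius=2):
    """Approximate a large-aperture "super focus" effect: blur the
    background layer while leaving the foreground layer sharp.

    image: (H, W) float array; foreground_mask: (H, W) bool array
    that is True where the first (foreground) image layer lies.
    """
    h, w = image.shape
    blurred = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            # Box blur: mean over a (2*radius+1)-wide neighborhood,
            # clipped at the image borders.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            blurred[y, x] = image[y0:y1, x0:x1].mean()
    # Keep foreground pixels untouched; use blurred values elsewhere.
    return np.where(foreground_mask, image, blurred)
```

Because the blur is applied only where the mask is False, the foreground remains pixel-identical to the input, which is the "first image layer but not the second" behavior described above.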
- a user may therefore experience greater control of visual effects of an image as compared to conventional systems in which an entire image (e.g., all image layers of an image) is edited based on a particular image editing operation.
- identification of the clusters is initiated automatically in response to user input selecting the image.
- identification of the clusters may be initiated automatically in response to user selection of the image via a user interface (UI) (e.g., selection of the image from an image gallery).
- Automatically identifying the clusters may "hide" a time lag associated with identification of the clusters. For example, by automatically identifying the clusters in response to selection of the image via the UI, the time lag associated with identification of the clusters may be "hidden" during loading of the image. That is, a user may perceive that the time lag is associated with loading of the image instead of with initiation of a particular image editing operation that is initiated after loading the image. In this example, when the user initiates the image editing operation, cluster identification may already be complete, which may cause the image editing operation to appear faster to the user.
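One way to realize this "hidden" time lag is to launch clustering on a background worker the moment the image is selected, so it overlaps the load/display step. The sketch below is illustrative only; `load_image` and `identify_clusters` are hypothetical stand-ins for the device's image pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def load_image(name):
    # Hypothetical loader standing in for reading image data.
    return {"name": name, "pixels": [[0, 1], [1, 0]]}

def identify_clusters(image):
    # Placeholder for an expensive clustering pass (e.g., superpixels).
    return [[(0, 0), (1, 1)], [(0, 1), (1, 0)]]

class Gallery:
    """Start cluster identification as soon as the user selects an
    image, so the lag overlaps image loading instead of the first edit."""

    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=1)
        self._clusters = None

    def select(self, name):
        image = load_image(name)
        # Kick off clustering immediately, in parallel with display.
        self._clusters = self._pool.submit(identify_clusters, image)
        return image  # display proceeds without waiting

    def begin_edit(self):
        # By the time an edit starts, clustering is often already done;
        # result() returns immediately in that case.
        return self._clusters.result()
```

When the user later starts editing, `begin_edit` typically finds the clusters already computed, so the edit appears to start instantly.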
- the processor may initiate one or more image processing operations automatically in response to user input related to the first image layer.
- processing of each individual image layer may be associated with a time lag.
- the processor may initiate one or more image processing operations associated with the first image layer (e.g., an image segmenting operation and/or an image labeling operation) using one or more identified clusters prior to receiving user input related to each of the multiple image layers.
- image processing of a foreground portion of an image may be initiated prior to receiving user input related to a background of the image. Accordingly, image editing performance of the processor may be improved as compared to a device that waits to initiate image processing operations until user input is received for each image layer.
- a mobile device may store user configuration parameters that determine image editing operations that may be performed at the mobile device in response to user input.
- the mobile device may include a display (e.g., a touchscreen).
- the display may depict an image, such as in connection with an image editing application executed by the mobile device.
- the image editing application may perform image editing operations on the image in response to user input.
- the user configuration parameters may indicate that the mobile device is to perform a particular image editing operation (e.g., a color change operation) in response to receiving first user input indicating a particular direction of movement, such as a "swipe" across the touchscreen in a vertical (or substantially vertical) direction.
- the user configuration parameters may further indicate that the mobile device is to perform a second image editing operation in response to second user input indicating the particular direction received after the first user input (e.g., a subsequent vertical swipe operation).
- the user configuration parameters may indicate that an image blurring operation is to be performed in response to the second user input.
- the user configuration parameters may be configurable by a user.
- the user configuration parameters may be modified by a user to indicate that the second image editing operation is to be performed prior to the first image editing operation (e.g., in response to the first user input).
- third user input received after the first user input and after the second user input may "undo" the first image editing operation and the second image editing operation. Accordingly, image editing is simplified for a user of the mobile device. Further, image editing operations may be configurable using the user configuration parameters.
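The configurable swipe-to-operation mapping described above can be modeled as an ordered list of operations consumed by repeated swipes, with a swipe past the end acting as "undo". This is a sketch under assumptions: the operation names and the class interface are invented for illustration, not taken from the patent:

```python
class SwipeEditor:
    """Map repeated swipes in one direction to an ordered, user-configurable
    sequence of image editing operations; a swipe past the end of the
    sequence undoes the operations performed so far."""

    def __init__(self, operations):
        # User configuration parameters: the order here is user-editable,
        # e.g. swapping the list reverses which edit the first swipe does.
        self.operations = list(operations)
        self.applied = []

    def swipe(self, direction):
        if direction != "vertical":
            return None  # only vertical swipes are mapped in this sketch
        if len(self.applied) < len(self.operations):
            op = self.operations[len(self.applied)]
            self.applied.append(op)
            return op
        self.applied.clear()  # third swipe "undoes" the prior operations
        return "undo"
```

Reordering the list passed to `SwipeEditor` corresponds to the user modifying the configuration parameters so the second operation runs first.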
- a method of manipulating an image by a device includes segmenting image data corresponding to the image into a first image layer and a second image layer. The method further includes adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.
- in another particular embodiment, an apparatus includes a memory and a processor coupled to the memory.
- the processor is configured to segment image data corresponding to an image into a first image layer and a second image layer.
- the processor is further configured to adjust a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.
- a non-transitory computer-readable medium stores instructions.
- the instructions are executable by a processor to cause the processor to segment image data associated with an image into a first image layer and a second image layer.
- the instructions are further executable by the processor to adjust a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.
- an apparatus includes means for segmenting image data associated with an image into a first image layer and a second image layer.
- the apparatus further includes means for adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.
- a method includes displaying a first image at a mobile device. The method further includes receiving first user input at the mobile device. The first user input indicates a direction relative to the mobile device. Based on the first user input, a first image editing operation is performed on the first image to generate a second image. The method further includes displaying the second image at the mobile device and receiving second user input at the mobile device. The second user input indicates the direction. The method further includes performing a second image editing operation on the second image to generate a third image based on the second user input.
- in another particular embodiment, an apparatus includes a memory and a processor coupled to the memory.
- the processor is configured to cause a mobile device to display a first image and to receive first user input at the mobile device.
- the first user input indicates a direction relative to the mobile device.
- the processor is further configured to perform a first image editing operation on the first image to generate a second image based on the first user input, to cause the mobile device to display the second image, and to receive second user input.
- the second user input indicates the direction.
- the processor is further configured to perform a second image editing operation on the second image to generate a third image based on the second user input.
- a computer-readable medium stores instructions that are executable by a processor to cause a mobile device to display a first image at the mobile device and to receive first user input at the mobile device.
- the first user input indicates a direction relative to the mobile device.
- the instructions are further executable by the processor to perform, based on the first user input, a first image editing operation on the first image to generate a second image, to display the second image at the mobile device, and to receive second user input at the mobile device.
- the second user input indicates the direction.
- the instructions are further executable by the processor to perform, based on the second user input, a second image editing operation on the second image to generate a third image.
- in another particular embodiment, an apparatus includes means for displaying a first image at a mobile device and means for receiving first user input at the mobile device.
- the first user input indicates a direction relative to the mobile device.
- the apparatus further includes means for performing a first image editing operation on the first image to generate a second image based on the first user input, means for causing the mobile device to display the second image, and means for receiving second user input.
- the second user input indicates the direction.
- the apparatus further includes means for performing a second image editing operation on the second image to generate a third image based on the second user input.
- in another particular embodiment, a method includes receiving first user input from a user interface.
- the first user input selects an image for a display operation.
- the method further includes performing the display operation based on the first user input and automatically initiating a clustering operation using image data corresponding to the image based on the first user input.
- in another particular embodiment, an apparatus includes a memory and a processor coupled to the memory.
- the processor is configured to receive first user input from a user interface. The first user input selects an image for a display operation.
- the processor is further configured to perform the display operation based on the first user input and to automatically initiate a clustering operation using image data corresponding to the image based on the first user input.
- a computer-readable medium stores instructions that are executable by a processor to cause the processor to receive first user input from a user interface.
- the first user input selects an image for a display operation.
- the instructions are further executable by the processor to perform the display operation based on the first user input and to automatically initiate a clustering operation using image data corresponding to the image based on the first user input.
- in another particular embodiment, an apparatus includes means for receiving first user input from a user interface.
- the first user input selects an image for a display operation.
- the apparatus further includes means for performing the display operation based on the first user input and means for automatically initiating a clustering operation using image data corresponding to the image based on the first user input.
- a particular advantage provided by at least one of the disclosed embodiments is independent image editing of a first image layer and a second image layer of an image.
- a user may therefore be enabled to "fine tune" image editing operations as compared to conventional systems in which an entire image (e.g., all image layers of an image) is edited based on a particular image editing operation.
- Another particular advantage provided by at least one of the disclosed embodiments is simplified control of a user interface (UI) by a user of a mobile device.
- the UI may enable the user to set user configuration parameters that assign certain image editing operations to a particular user input (e.g., a swipe in a particular direction), which simplifies user control of an image editing application executed by the mobile device.
- Another particular advantage of at least one of the disclosed embodiments is a faster image editing experience as perceived by a user of a device.
- FIG. 1 is a block diagram of a particular illustrative embodiment of a processor;
- FIG. 2 illustrates aspects of certain example image processing operations that may be performed by the processor of FIG. 1;
- FIG. 3 illustrates additional aspects of example image processing operations that may be performed by the processor of FIG. 1;
- FIG. 4 illustrates additional aspects of example image processing operations that may be performed by the processor of FIG. 1;
- FIG. 5 illustrates additional aspects of example image processing operations that may be performed by the processor of FIG. 1;
- FIG. 6 is a flow diagram illustrating a method that may be performed by the processor of FIG. 1;
- FIG. 7 is a flow diagram illustrating another method that may be performed by the processor of FIG. 1;
- FIG. 8 is a block diagram of a particular illustrative embodiment of a mobile device that may include the processor of FIG. 1;
- FIG. 9 is a block diagram illustrating example operating states of a mobile device;
- FIG. 10 is a flow diagram illustrating a method that may be performed by the mobile device of FIG. 9;
- FIG. 11 is a block diagram of a particular illustrative embodiment of the mobile device of FIG. 9.
- FIG. 12 is a flow diagram illustrating a method that may be performed by a device, such as a mobile device that includes the processor of FIG. 1.
- the processor 100 includes a cluster identifier 124, an image segment generator 128, an image component labeler 132, and an image modifier 136.
- the processor 100 may be responsive to image data 102.
- the image data 102 may be received from a camera or from a camera controller associated with the camera.
- the image data 102 may include one or more image layers, such as an image layer 104a and an image layer 106a.
- the image layers 104a, 106a may correspond to a foreground portion of an image and a background portion of the image, respectively.
- the image layers 104a, 106a may each correspond to a foreground portion or may each correspond to a background portion.
- the image data 102 may further include one or more clusters of pixels (e.g., a pixel cluster corresponding to an object depicted in the image).
- FIG. 1 illustrates that the image layer 104a may include a cluster 108a and a cluster 110a.
- FIG. 1 further illustrates that the image layer 106a may include a cluster 112a and a cluster 114a.
- the clusters 108a, 110a, 112a, and 114a may include one or more attributes, such as an attribute 116, an attribute 118a, an attribute 120a, and/or an attribute 122a.
- the attributes 116, 118a, 120a, and/or 122a may correspond to visual aspects of an image, such as a color, a sharpness, a contrast, a context of the image (e.g., a setting, such as a background setting), a blurring effect and/or another aspect, as illustrative examples.
- the cluster identifier 124 may be responsive to the image data 102 to identify one or more clusters of the image using one or more cluster identification techniques. For example, the cluster identifier 124 may identify one or more clusters of the image data 102, such as one or more of the clusters 108a, 110a, 112a, or 114a.
- the cluster identifier 124 may analyze the image data 102 to generate a cluster identification 126.
- the cluster identification 126 may identify one or more of the clusters 108a, 110a, 112a, or 114a.
- a cluster may correspond to a group of similar pixels of the image data 102. To illustrate, pixels may be similar if the pixels are spatially similar (e.g., within a common threshold area) and/or if the pixels are numerically similar (e.g., within a pixel value threshold range).
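The two similarity conditions above (spatial closeness and numerical closeness) can be expressed directly as a predicate. The thresholds and pixel tuple layout below are illustrative choices, not values from the patent:

```python
def similar(p, q, spatial_threshold=2, value_threshold=10):
    """Two pixels are candidates for the same cluster when they are
    spatially similar (within a common threshold area) AND numerically
    similar (within a pixel value threshold range).

    Each pixel is a (x, y, value) tuple; thresholds are illustrative.
    """
    (x1, y1, v1), (x2, y2, v2) = p, q
    spatially_similar = (abs(x1 - x2) <= spatial_threshold
                         and abs(y1 - y2) <= spatial_threshold)
    numerically_similar = abs(v1 - v2) <= value_threshold
    return spatially_similar and numerically_similar
```

A cluster identifier would apply such a predicate (or a distance combining both terms, as in SLIC below) when grouping pixels.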
- the cluster identifier 124 may perform one or more operations to compare pixels of the image data 102 to identify one or more groups of similar pixels to generate the cluster identification 126.
- the cluster identifier 124 may be configured to generate the cluster identification 126 using a "superpixel” technique that identifies one or more superpixels of the image data 102.
- the one or more superpixels may correspond to the clusters 108a, 110a, 112a, and 114a.
- the cluster identifier 124 is configured to operate in accordance with a simple linear iterative clustering (SLIC) technique.
- the SLIC technique may divide the image data into a "grid” and may compare pixels of the image data 102 within each component of the grid to identify clusters of the image data 102.
- the SLIC technique may be performed in connection with a color space model that maps colors to a multi-dimensional model, such as an International Commission on Illumination L*, a*, and b* (CIELAB) color space model.
- a spatial extent of any superpixel is approximately 25. Accordingly, pixels included in a particular superpixel may lie within a 25x25 area around the center of the superpixel (relative to the x-y plane). The 25x25 area may correspond to a "search area" for the pixels similar to each superpixel center.
- certain Euclidean distances may be perceivable by a user when implemented at a display, potentially causing poor visual appearance or another effect. If spatial pixel distances exceed such a perceptual color distance threshold, the spatial pixel distances may outweigh pixel color similarities, causing image distortion (e.g., resulting in superpixels that do not respect region boundaries, only proximity in the image plane).
- D_s corresponds to a sum of the lab distance (d_lab) and the x-y plane distance (d_xy) normalized by the grid interval size S and having a "compactness" determined by the variable m, i.e., D_s = d_lab + (m / S) * d_xy.
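The combined distance D_s can be written as a small function. This sketch follows the standard SLIC formulation; the pixel tuple layout (CIELAB components plus x-y coordinates) is an assumption for the example:

```python
import math

def slic_distance(p1, p2, S, m):
    """SLIC combined distance D_s = d_lab + (m / S) * d_xy, where S is
    the grid interval and m controls compactness.

    Each pixel is a (l, a, b, x, y) tuple: CIELAB color plus position.
    """
    l1, a1, b1, x1, y1 = p1
    l2, a2, b2, x2, y2 = p2
    # Euclidean distance in the CIELAB color space.
    d_lab = math.sqrt((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
    # Euclidean distance in the x-y image plane.
    d_xy = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    return d_lab + (m / S) * d_xy
```

A larger m weights spatial proximity more heavily, yielding more compact superpixels; a larger grid interval S discounts spatial distance, matching the normalization described above.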
- Table 1 illustrates example pseudocode corresponding to an example operation of the cluster identifier 124.
- the image segment generator 128 may be responsive to the cluster identifier 124 to segment the image using one or more segmentation techniques. For example, the image segment generator 128 may generate a segmentation mask 130 based on the cluster identification 126. In a particular example, the segmentation mask 130 identifies one or more foreground or background layers of the image data 102, such as by separating the image layer 104a from the image layer 106a based on the cluster identification 126. The image segment generator 128 may generate the segmentation mask 130 by isolating one or more clusters identified by the cluster identifier 124 from a remainder of the image data 102. For example, the image segment generator 128 may segment (e.g., remove, partition, etc.) one or more groups of pixels indicated by the cluster identification 126 from the image data 102 to generate the segmentation mask 130.
- the image segment generator 128 is responsive to a set of superpixels z_n generated by the cluster identifier 124.
- the superpixels may be represented using the CIELAB color space model.
- the image segment generator 128 may apply a "grabcut" technique to the set of superpixels.
- the image segment generator 128 may utilize the grabcut technique to generate a Gaussian mixture model (GMM).
- the image segment generator 128 is configured to generate a first GMM having a first set of Gaussian distributions corresponding to superpixels of a foreground of an image and is further configured to generate a second GMM having a second set of Gaussian distributions corresponding to superpixels of a background of the image.
- a GMM component may be assigned to each pixel.
- Operation of the image segment generator 128 may be associated with an energy function, such as a Gibbs energy of the form E(α, k, θ, z) = U(α, k, θ, z) + V(α, z), where U is the data term and V is the smoothness term.
- the smoothness term V is unchanged relative to a monochrome example except that the contrast term is computed using Euclidean distance in color space, according to V(α, z) = γ Σ_{(m,n)∈C} [α_n ≠ α_m] exp(−β ||z_m − z_n||²).
- Table 2 illustrates example pseudo-code corresponding to an example operation of the processor 100.
- the superpixels on the foreground are set to T_F;
- the superpixels on the background are set to T_B.
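The assignment of superpixels to T_F or T_B can be sketched as a likelihood comparison against the two region models. For brevity this example uses a single 1-D Gaussian per region rather than a full multi-component GMM over CIELAB vectors, so it is a simplification of the grabcut-style step described above; all names are illustrative:

```python
import math

def gaussian_log_likelihood(x, mean, var):
    # Log-likelihood under a 1-D Gaussian; a stand-in for the full
    # GMM evaluation used by a grabcut-style segmenter.
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def assign_superpixels(superpixels, fg_model, bg_model):
    """Label each superpixel T_F or T_B according to which region model
    (mean, variance) explains its mean color value better."""
    labels = []
    for value in superpixels:
        fg = gaussian_log_likelihood(value, *fg_model)
        bg = gaussian_log_likelihood(value, *bg_model)
        labels.append("T_F" if fg >= bg else "T_B")
    return labels
```

In a full implementation the models would be re-estimated from the current labeling and the assignment iterated, alternating with the graph-cut minimization of the Gibbs energy.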
- the image component labeler 132 may be responsive to the image segment generator 128.
- the image component labeler 132 may analyze the segmentation mask 130 for one or more image artifacts, such as an image artifact 134.
- the image artifact 134 may correspond to a portion of the image that is "unintentionally" separated from another portion of the image.
- a portion of the image may be "misidentified" as being in the foreground or background of the image, and the image artifact 134 may correspond to the "misidentified" portion.
- the image component labeler 132 may be responsive to user input to identify the image artifact 134.
- the image component labeler 132 is configured to compensate for operation of the image segment generator 128.
- the segmentation mask 130 may have one or more "holes" due to color-based operation of the image segment generator 128 (e.g., image artifacts, such as the image artifact 134),
- one or more objects or layers may be "mislabeled” due to color similarity. For example, different colors of a common object may be mislabeled as foreground and background, and/or similar colors of different objects may be mislabeled as foreground or background.
- the image component labeler 132 may be configured to operate on a foreground region as an object, and the object may be operated on as a domain, such as a "simple-connectivity domain.”
- Table 3 illustrates example pseudo-code corresponding to an example operation of the image component labeler 132.
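A core step of such a labeler is connected-component analysis over the segmentation mask; small isolated components can then be flagged as artifacts or "holes". The following is a generic 4-connected labeling sketch (plain Python, breadth-first search), not the patent's pseudo-code:

```python
from collections import deque

def connected_components(mask):
    """4-connected component labeling over a binary mask given as a list
    of lists of 0/1. Returns (labels, count), where labels assigns each
    foreground pixel a component id starting at 1. Small components can
    then be treated as artifacts in the spirit of the labeler's cleanup."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] == 1 and labels[sy][sx] == 0:
                current += 1  # found an unlabeled foreground pixel
                queue = deque([(sy, sx)])
                labels[sy][sx] = current
                while queue:  # flood-fill the component
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

Treating each foreground region as a simply-connected domain, a labeler could merge or discard components below a size threshold to remove artifacts such as the image artifact 134.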
- the image modifier 136 may be responsive to the image component labeler 132.
- the image modifier 136 may be further responsive to user input, such as via a user interface (UI) of a device that includes the processor 100, to adjust a first attribute of a first layer of the image data 102 independently of a second attribute of a second layer of the image data 102.
- FIG. 1 illustrates that the image modifier 136 may generate modified image data 138 corresponding to the image data 102.
- the modified image data 138 may depict independent modification of the image layer 104a with respect to the image layer 106a.
- the modified image data 138 may include an image layer 104b
- the image layer 104b may include clusters 108b, 110b corresponding to the clusters 108a, 110a, and the image layer 106b may include clusters 112b, 114b corresponding to the clusters 112a, 114a.
- the example of FIG. 1 illustrates that the cluster 108b has an attribute 140 that has been modified (e.g., based on the user input) relative to the attribute 116.
- the user input may indicate modification of a color attribute, a sharpness attribute, a blurring attribute, and/or a context attribute of the image data 102 to cause the processor 100 to generate the modified image data 138.
- FIG. 1 illustrates that the attribute 116 has been modified to generate the attribute 140 independently of one or more other attributes, such as independently of the attributes 118b, 120b, and 122b (which may remain unchanged relative to the attributes 118a, 120a, and 122a or which may be adjusted, depending on the particular user input).
- the examples of FIG. 1 illustrate independent adjustment of multiple layers of an image to achieve one or more visual effects.
- the example of FIG. 1 therefore enables increased user control of image editing operations of a device that includes the processor 100.
- a user of a device may, for example, blur the background of an image but not the foreground to approximate a "super focus" camera effect.
- FIG. 1 describes an example of a "superpixel-based grabcut” technique for extracting image layers.
- Certain conventional image processing techniques attempt to segment an image "globally" (or on a "per pixel” basis).
- the example of FIG. 1 identifies clusters of the image data 102 and segments an image based on the clusters, which may improve performance of image processing operations as compared to global techniques.
- a number of image refinement operations (e.g., one or more algorithm iterations to "correct" one or more boundaries of an image layer or object) may be reduced as a result.
- edge recall and compactness are two features of a clustering technique (e.g., SLIC).
- Edge recall may be associated with enhanced boundary detection, and compactness may be useful in connection with an image segmenting operation (e.g., grabcut).
- a device that utilizes a superpixel-based grabcut technique may feature improved performance.
- the image 200 includes a background 202 and a foreground 204.
- the background 202 corresponds to the image layer 106a
- the foreground 204 corresponds to the image layer 104a
- the image 200 may correspond to the image data 102 (e.g., the image data 102 may represent the image 200).
- FIG. 2 further illustrates a clustered image 210.
- the clustered image 210 may be generated by the cluster identifier 124.
- the clustered image 210 includes multiple clusters of pixels of the image 200, such as a representative cluster 212.
- the cluster 212 may be identified by the cluster identification 126.
- FIG. 2 further illustrates a resulting image 220.
- the resulting image 220 illustrates adjustment of a first attribute of a first layer of the image 200 independently of a second attribute of a second layer of the image 200 based on the clustered image 210.
- the background portion 202 of the image 200 has been removed to generate the resulting image 220.
- the background 202 may be removed based on the clustered image 210, such as based on similarities of clusters of the clustered image 210.
- predetermined content can be substituted for the background 202.
- a forest scene corresponding to the background 202 may be replaced with a beach scene (or another scene).
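Substituting predetermined content for the background reduces to a masked composite once the segmentation mask is available. This is a minimal NumPy sketch with illustrative names, not the patent's implementation:

```python
import numpy as np

def replace_background(image, foreground_mask, replacement):
    """Substitute predetermined content for the background layer, e.g.
    swapping a forest scene for a beach scene, while the foreground
    (where the segmentation mask is True) is kept as-is."""
    return np.where(foreground_mask, image, replacement)
```

The same one-liner supports background removal by passing a constant (e.g., zeros or a fill color) as the replacement.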
- FIG. 2 illustrates independent modification of layers of an image.
- a user of a device may therefore experience greater control of image editing operations as compared to a device that applies image editing effects to an entire image.
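The background substitution described for FIG. 2 can be sketched as a masked replacement: foreground pixels survive, background pixels are swapped for predetermined content. A minimal sketch under assumed data; `replace_background`, the mask layout, and the "beach" content are illustrative names, not from the patent.

```python
def replace_background(image, mask, replacement):
    """Keep foreground pixels (mask == 1); substitute replacement content elsewhere."""
    return [
        [px if m else rp for px, m, rp in zip(img_row, mask_row, rep_row)]
        for img_row, mask_row, rep_row in zip(image, mask, replacement)
    ]

photo = [["F", "F"], ["B", "F"]]   # F = foreground pixel, B = background pixel
fg_mask = [[1, 1], [0, 1]]          # 1 marks the foreground layer
beach = [["S", "S"], ["S", "S"]]    # predetermined replacement scene
result = replace_background(photo, fg_mask, beach)
assert result == [["F", "F"], ["S", "F"]]
```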
- FIG. 3 depicts an example of an image 300, an illustrative depiction of a segmentation mask 310 corresponding to the image 300, and a modified image 320.
- the modified image 320 is generated using the segmentation mask 310 to modify the image 300.
- the segmentation mask 310 identifies multiple foreground objects.
- the image segment generator 128 of FIG. 1 may segment the image 300 by segmenting the multiple foreground objects relative to a background of the image 300. In this manner, independent modification of image layer attributes is enabled.
- the modified image 320 includes a blurred background. Further, the modified image 320 may include one or more foreground objects that have been modified relative to the image 300, such as by changing a color attribute of the one or more foreground objects. To illustrate, the segmentation mask 310 identifies multiple foreground objects which each can be independently modified relative to each other (and relative to the background), such as by modifying a shirt color of one foreground object independently of a shirt color of another foreground object.
- FIG. 3 illustrates that a segmentation mask (such as the segmentation mask 310) may be used to enable independent adjustment of attributes of layers of an image.
- the segmentation mask 310 may enable independent color adjustment of foreground portions of the image 300.
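The per-object independence described for FIG. 3 (e.g., changing one shirt color without touching another) can be sketched with a multi-label mask: label 0 for background, labels 1 and 2 for two foreground objects. The pixel values, labels, and `adjust_layer` helper are assumptions for illustration.

```python
def adjust_layer(image, mask, target_label, edit):
    """Apply `edit` only to pixels whose mask label equals target_label."""
    return [
        [edit(px) if lbl == target_label else px
         for px, lbl in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

image = [[(50, 50, 50), (200, 0, 0)],
         [(0, 200, 0), (50, 50, 50)]]
mask = [[0, 1],
        [2, 0]]   # 0 = background, 1 and 2 = independent foreground objects
# Recolor object 1 (e.g., one shirt) to blue; object 2 and background untouched.
edited = adjust_layer(image, mask, target_label=1, edit=lambda px: (0, 0, 255))
assert edited[0][1] == (0, 0, 255)    # object 1 recolored
assert edited[1][0] == (0, 200, 0)    # object 2 unchanged
assert edited[0][0] == (50, 50, 50)   # background unchanged
```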
- an image is depicted and generally designated 400.
- the image 400 may be displayed at a user interface (UI).
- the UI may enable a user to independently adjust a first attribute of a first layer of the image 400 relative to a second attribute of a second layer of the image 400.
- the user input 402 may correspond to a swipe action at a display device by a user, such as at a display device of a mobile device displaying the image 400.
- the user input 402 may indicate an image layer of the image 400, such as an image background.
- the user input 402 may indicate at least a threshold number of pixels of the UI in order to select the image layer of the image 400.
- FIG. 4 further illustrates an image 410.
- a background portion has been removed, such as in response to the user input 402.
- the background portion of the image 400 may be removed using a cluster identification and/or segmentation technique, such as one or more techniques described with reference to FIGS. 1-3.
- FIG. 4 further illustrates an image 420 corresponding to a refinement of the image 410. For example, based on additional user input, an additional background portion of the image 410 may be removed to generate the image 420.
- user input may be received to update the image 420 to remove one or more additional background portions.
- additional user input may be received to generate a segmentation mask 430.
- the segmentation mask 430 includes an image artifact 432.
- the image artifact 432 may correspond to the image artifact 134 of FIG. 1.
- operation of the image segment generator 128 may generate a segmentation mask 130 corresponding to the segmentation mask 430.
- the segmentation mask 130 may include one or more image artifacts, such as the image artifact 432.
- the image component labeler 132 is operative to remove the image artifact 432 to generate an image 440.
- the image 440 may correspond to the modified image data 138.
- FIG. 4 illustrates techniques to enable greater control by a user of image editing operations.
- FIG. 4 further illustrates removal of an image artifact, such as the image artifact 432, to further improve quality of image editing operations.
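The artifact removal attributed to the image component labeler can be sketched as connected-component labeling with a size threshold: label each 4-connected region of the binary segmentation mask, then clear regions smaller than `min_size` (a speck like the artifact 432). The flood-fill approach, the connectivity, and the threshold value are assumptions, not the patent's stated algorithm.

```python
def remove_small_components(mask, min_size):
    """Zero out 4-connected components of a binary mask smaller than min_size."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in mask]
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # Flood-fill one component, collecting its pixels.
                stack, comp = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_size:      # too small: treat as an artifact
                    for y, x in comp:
                        out[y][x] = 0
    return out

mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],   # the lone 1 at (1, 3) is a speck-like artifact
        [0, 0, 0, 0]]
cleaned = remove_small_components(mask, min_size=2)
assert cleaned[1][3] == 0    # artifact removed
assert cleaned[0][0] == 1    # main foreground component kept
```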
- the techniques of FIG. 4 may be utilized in connection with a user interface (UI), as described further with reference to FIG. 5.
- FIG. 5 illustrates an example of a user interface (UI) 500.
- the UI 500 may be presented at a display, such as at a display of a mobile device.
- the display may correspond to a touchscreen display configured to receive user input.
- the UI 500 indicates multiple images that are presented to a user in connection with an image editing application (e.g., a mobile device application that graphically presents images and that facilitates image editing operations on the images).
- FIG. 5 further illustrates a UI 510 corresponding to the UI 500 upon selection of an image 502.
- the image editing application may enlarge the image 502 to generate a UI 520 (e.g., by enlarging the image from a thumbnail view to a full view).
- the user input 504 may correspond to a swipe action or a tap action at the UI 510, as illustrative examples.
- a user interface (UI) 530 depicts the image 502 in connection with multiple buttons, such as buttons 532, 534, 536, 538.
- the buttons 532, 534, 536, and 538 may be assigned one or more operations, such as adjustment of image attributes of the image 502.
- a user may select the button 534.
- the button 534 may be selected by a user to facilitate indication of a background or foreground portion of the image depicted by the UI 530.
- FIG. 5 illustrates a UI 540 in which user input is received to designate a background and/or foreground of the image 502.
- the user input may correspond to the user input 402 of FIG. 4.
- a user may select the button 534 (e.g., to enter a background designation mode of operation) and then enter user input (e.g., the user input 402) to designate a background portion of the image displayed at the UI 540.
- the user input may correspond to a swipe action that designates a background portion of the image.
- one or more of the buttons 532, 536, and 538 may function as a foreground designation button usable to designate a foreground portion of the image.
- One or more of the buttons 532, 536, 538 may correspond to default operations (e.g., associated with a particular image editing application) and/or user-defined operations (e.g., user-defined operations based on user preference input).
- if the image includes multiple foreground objects, the buttons 532, 536, and 538 may enable a user to select between the multiple objects.
- the button 536 may be used to designate a first foreground object
- the button 538 may be used to designate a second foreground object.
- a user swipe action indicating the first foreground object may initiate an image editing operation targeting the first foreground object.
- a user swipe action indicating the second foreground object may initiate an image editing operation targeting the second foreground object.
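The button/swipe routing of FIG. 5 might be wired as a small mode-dispatch table: pressing a button selects a designation mode, and the next swipe is routed to whatever that mode targets. The class, mode names, and handler behavior are hypothetical; only the button numbers follow the figure.

```python
# Hypothetical mapping of FIG. 5's buttons to designation modes.
MODES = {
    534: "designate_background",
    536: "designate_foreground_object_1",
    538: "designate_foreground_object_2",
}

class EditorUI:
    def __init__(self):
        self.mode = None
        self.log = []   # (mode, number of swipe points) for each routed swipe

    def press_button(self, button_id):
        self.mode = MODES.get(button_id)

    def swipe(self, path):
        """Route a swipe (a list of touch points) to the current mode's target."""
        if self.mode is None:
            return None
        self.log.append((self.mode, len(path)))
        return self.mode

ui = EditorUI()
ui.press_button(534)                         # enter background-designation mode
target = ui.swipe([(0, 0), (5, 5), (9, 9)])  # swipe designates the background
assert target == "designate_background"
assert ui.log == [("designate_background", 3)]
```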
- FIG. 5 illustrates enhanced user interface (UI) techniques to enable a user to simply and effectively control image editing operations.
- a user may designate a background portion (or a foreground portion) of an image using the button 534 as described in the example of FIG. 5.
- Referring to FIG. 6, a particular illustrative embodiment of a method is depicted and generally designated 600.
- the method 600 may be performed at a device, such as at a mobile device that includes a processor.
- the method 600 is performed by the processor 100 of FIG. 1.
- the method 600 includes receiving image data corresponding to an image, at 604.
- the image data may correspond to the image data 102.
- the image may correspond to an image captured by a camera, and the image data may be loaded in connection with an image editing application to enable editing of the image.
- the method 600 may further include identifying a cluster associated with the image data, at 608.
- the cluster identifier 124 may identify a cluster of the image data 102, such as the cluster 108a.
- the method 600 may further include segmenting the image data by identifying a first image layer of the image based on the cluster, at 612.
- the cluster identifier 124 may provide the cluster identification 126 to the image segment generator 128.
- the image segment generator 128 may segment the image data by identifying a foreground portion of the image.
- the image segment generator 128 may generate the segmentation mask 130 to enable independent modification of image layers of the image.
- the method 600 may further include initiating one or more component labeling operations using the first image layer, at 616.
- the method 600 may further include identifying a second image layer (e.g., a background) of the image, at 620.
- the method 600 may further include prompting a user of the device to adjust a first attribute of the first image layer independently of a second attribute of the second image layer, at 624.
- a user of the device may be prompted to adjust the attribute 116 (e.g., to generate the attribute 140) independently of one or more of the attributes 118a, 120a, and 122a.
- the method 600 may further include receiving user input, at 628.
- the user input is received at a display of the device, such as at a touchscreen.
- the method 600 may further include generating a modified image based on the user input, at 640.
- the modified image may correspond to the modified image data 138 of FIG. 1.
- the method 600 of FIG. 6 enables simplified and efficient control of image editing operations by a user.
- the method 600 may be utilized in connection with a portable device (e.g., a mobile device having a touchscreen user interface) while still enabling a high level of user control over image editing operations (e.g., independent adjustment of layers of an image).
- the method 700 may be performed at a device, such as at a mobile device that includes a processor. In a particular illustrative embodiment, the method 700 is performed by the processor 100 of FIG. 1.
- the method 700 includes segmenting image data corresponding to an image into a first image layer and a second image layer, at 708.
- the first image layer and the second image layer may respectively correspond to the image layers 104a, 106a, as illustrative examples.
- the method 700 may further include adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input, at 712.
- the first attribute corresponds to the attribute 116
- the second attribute corresponds to one or more of the attributes 120a, 122a.
- the method 700 facilitates enhanced image editing operations.
- an image editing operation may separately target image layers (e.g., background and foreground) to achieve a different image editing effect on one image layer relative to another image layer.
- the mobile device 800 may include one or more processing resources 810.
- the one or more processing resources 810 include the processor 100.
- the one or more processing resources 810 may be coupled to a computer-readable medium, such as to a memory 832 (e.g., a non- transitory computer-readable medium).
- the memory 832 may store instructions 858 executable by the one or more processing resources 810 and data 852 usable by the one or more processing resources 810.
- the memory 832 may further store cluster identifying instructions 892, image segmenting instructions 894, and/or image labeling instructions 896.
- the mobile device 800 may include a camera having an image sensor, such as a charge-coupled device (CCD) image sensor and/or a complementary metal-oxide- semiconductor (CMOS) image sensor.
- FIG. 8 depicts that a camera 856 may be coupled to a camera controller 890.
- the camera controller 890 may be coupled to the one or more processing resources 810.
- the instructions 858 may include an image editing application executable by the processing resources 810 to edit one or more images captured by the camera 856, and the data 852 may include image data corresponding to the one or more images, such as the image data 102.
- FIG. 8 also shows a display controller 826 that is coupled to the one or more processing resources 810 and to a display 828.
- the display may be configured to present a user interface (UI) 872.
- the display 828 includes a touchscreen, and the UI 872 is responsive to user operations at the touchscreen (e.g., a swipe operation).
- a coder/decoder (CODEC) 834 can also be coupled to the one or more processing resources 810.
- a speaker 836 and a microphone 838 can be coupled to the CODEC 834.
- FIG. 8 also indicates that a wireless controller 840 can be coupled to the one or more processing resources 810.
- the wireless controller 840 may be further coupled to an antenna 842 via a radio frequency (RF) interface 880.
- the one or more processing resources 810, the memory 832, the display controller 826, the camera controller 890, the CODEC 834, and the wireless controller 840 are included in a system-in-package or system-on-chip device 822.
- An input device 830 and a power supply 844 may be coupled to the system-on-chip device 822.
- the display 828, the input device 830, the camera 856, the speaker 836, the microphone 838, the antenna 842, the RF interface 880, and the power supply 844 are external to the system-on-chip device 822.
- each of the display 828, the input device 830, the camera 856, the speaker 836, the microphone 838, the antenna 842, the RF interface 880, and the power supply 844 can be coupled to a component of the system-on-chip device 822, such as to an interface or to a controller.
- a non-transitory computer-readable medium stores instructions.
- the non-transitory computer-readable medium may correspond to the memory 832, and the instructions may include any of the cluster identifying instructions 892, the image segmenting instructions 894, the image labeling instructions 896, and/or the instructions 858.
- the instructions are executable by a processor (e.g., the processor 100) to cause the processor to segment image data associated with an image into a first image layer and a second image layer.
- the image data may correspond to the image data 102, and the first image layer and the second image layer may correspond to the image layers 104a, 106a.
- the instructions are further executable by the processor to adjust a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.
- the first attribute and the second attribute may correspond to the attributes 116, 120a, as an illustrative example.
- an apparatus (e.g., the processor 100) includes means for segmenting image data corresponding to an image into a first image layer and a second image layer.
- the first image layer and the second image layer may correspond to the image layers 104a, 106a.
- the means for segmenting the image data may correspond to the image segment generator 128, and the image data may correspond to the image data 102.
- the apparatus further includes means for adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.
- the means for adjusting the first attribute may correspond to the image modifier 136.
- the first attribute and the second attribute may correspond to the attributes 116, 120a, as an illustrative example.
- a first operating state of a mobile device 902 is depicted and generally designated 900.
- the mobile device 902 may include the processor 100 (not shown in FIG. 9). Alternatively or in addition, the mobile device may include another processor.
- the mobile device 902 includes a display device 904 and a memory 906.
- the display device 904 may display an image 908 having an attribute 910 (e.g., a color attribute) and further having an attribute 912 (e.g., a blurring attribute).
- the attributes 910, 912 may correspond to a common layer or to separate layers of the image 908.
- the attribute 910 corresponds to the attribute 116 of the image layer 104a
- attribute 912 corresponds to the attribute 120a of the image layer 106a.
- the memory 906 may store image data 914 corresponding to the image 908 and may further store one or more user configuration parameters 916.
- the user configuration parameters 916 may determine how user input received at the mobile device 902 affects one or more of the attributes 910, 912.
- user input 918 may be received at the mobile device 902.
- the user input 918 may substantially indicate a first direction, such as a vertical direction or a horizontal direction relative to the mobile device 902.
- user input may "substantially" have a direction if, depending on the particular device configuration and/or application, the user input would be recognized by a device as indicating the direction.
- a swipe input may not be precisely vertical but may be substantially vertical if a device would recognize the swipe input as indicating the vertical direction.
- a device may be configured such that if a swipe operation has a certain vector component within the direction, then the swipe operation is recognized as indicating the direction.
- user input at a device may be resolved into multiple directional components (e.g., vectors) by the device. The device may compare the multiple directional components to determine a ratio of the multiple directional components.
- based on the ratio (e.g., if the ratio satisfies a threshold), the device may determine that the user input indicates the direction. To further illustrate, if the user input is not a straight line (or is not approximately straight), the device may approximate the user input by "fitting" (e.g., interpolating) points associated with the user input to a line according to a technique.
- the technique may include a "minimum mean squared error" (MMSE) technique, as an illustrative example.
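One way the direction resolution above could be implemented is to compare the spread of the swipe's touch points along each axis, which approximates the slope of a least-squares (MMSE-style) fitted line. The function name and the ratio threshold are assumptions for illustration.

```python
def classify_swipe(points, ratio_threshold=2.0):
    """Resolve touch points into a substantially vertical/horizontal/diagonal swipe."""
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    # Spread of the points along each axis (variance terms of a least-squares fit).
    sxx = sum((p[0] - mean_x) ** 2 for p in points)
    syy = sum((p[1] - mean_y) ** 2 for p in points)
    if sxx == 0:
        return "vertical"
    if syy == 0:
        return "horizontal"
    if syy / sxx >= ratio_threshold ** 2:
        return "vertical"
    if sxx / syy >= ratio_threshold ** 2:
        return "horizontal"
    return "diagonal"

# A wobbly but substantially vertical swipe: x barely moves while y sweeps down.
swipe = [(100, 10), (103, 60), (99, 120), (102, 180)]
assert classify_swipe(swipe) == "vertical"
assert classify_swipe([(10, 50), (80, 52), (150, 49)]) == "horizontal"
```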
- the user configuration parameters 916 may indicate that user input indicating the first direction indicates a first image editing operation to be performed on the image 908.
- the user configuration parameters 916 may indicate that user input indicating the first direction indicates a color attribute change operation.
- the user input 918 includes a swipe operation (e.g., a vertical or a horizontal swipe). It should be appreciated that in one or more alternate examples the user input 918 may include another operation, such as a hover operation, a tap operation, a stylus input operation, an infrared (IR)-based operation, a pointing gesture (e.g., in connection with a multi-camera arrangement configured to detect pointing gestures), or another operation, depending on the particular implementation.
- FIG. 9 further indicates a second operating state 920 of the mobile device 902.
- the attribute 910 has been modified based on the user input 918, generating an attribute 922.
- the mobile device 902 may generate modified image data 926 in response to the user input 918.
- User input 928 may be received at the mobile device 902.
- the user input 928 may substantially indicate the first direction.
- the user configuration parameters 916 may indicate that user input identifying the first direction is to cause another image editing operation on the image 908.
- a third operating state 930 of the mobile device 902 indicates that the attribute 912 has been modified based on the user input 928 to generate an attribute 932.
- the attribute 932 may correspond to a blurring of an image layer of the image 908.
- the mobile device 902 may generate modified image data 936 indicating the attribute 932.
- the modified image data 936 may correspond to the modified image data 926 with a "blurring" effect (e.g., after application of a Gaussian blurring technique to the modified image data 926).
- user input indicating a first direction indicates a first image editing operation.
- a horizontal swipe action may indicate a color change operation that targets a particular layer of an image (e.g., a foreground).
- One or more subsequent horizontal swipe actions may "cycle" through different color change operations (e.g., red to blue to green, etc.).
- User input indicating a second direction may indicate a second image editing operation, such as an image editing operation to a different layer of the image.
- a vertical swipe action may select or cause an image blurring operation, such as to a background of the image.
- One or more subsequent vertical swipe actions may select or cause one or more additional image editing operations to the background, such as by replacing the background with predetermined content (e.g., a beach scene) and/or other content.
- swiping in a first direction (e.g., vertically) or in a second direction (e.g., horizontally) may cycle through options (e.g., colors, blurring intensities, etc.); more generally, swiping in different directions or along different axes may correspond to different image editing operations (e.g., swiping up/down for color changes, swiping left/right for blurring, swiping diagonally for background scene replacement, etc.).
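The direction-to-operation cycling described above could be modeled as a per-direction cursor into a configurable list of operations, where repeated swipes in the same direction advance through that list. The operation names and their order are illustrative assumptions (and would be user-configurable, per FIG. 11).

```python
class GestureEditor:
    def __init__(self, config):
        self.config = config                     # direction -> cycle of operations
        self.position = {d: 0 for d in config}   # per-direction cursor

    def on_swipe(self, direction):
        """Return the next operation in this direction's cycle."""
        ops = self.config[direction]
        op = ops[self.position[direction] % len(ops)]
        self.position[direction] += 1
        return op

config = {
    "horizontal": ["foreground_color_red", "foreground_color_blue",
                   "foreground_color_green"],            # cycles through colors
    "vertical": ["blur_background", "replace_background_beach"],
}
editor = GestureEditor(config)
assert editor.on_swipe("horizontal") == "foreground_color_red"
assert editor.on_swipe("horizontal") == "foreground_color_blue"  # cycle advances
assert editor.on_swipe("vertical") == "blur_background"
```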
- the particular directions associated with user input operations may be configured by a user.
- the user configuration parameters 916 may be user-configurable to indicate that a diagonal swipe action is to indicate color change operations (e.g., instead of the horizontal direction) or to indicate image blurring operations (e.g., instead of the vertical direction).
- User configuration of the user configuration parameters 916 is described further with reference to FIG. 11.
- FIG. 9 illustrates simplified control of image editing operations.
- a user of the mobile device 902 may perform multiple image editing operations using a convenient and fast input method (e.g., a swipe action), reducing complexity of image editing operations.
- the method 1000 includes displaying a first image at a mobile device, at 1004.
- the mobile device may correspond to the mobile device 902, and the first image may correspond to the image 908.
- the method 1000 further includes receiving first user input at the mobile device, at 1008.
- the first user input indicates a direction relative to the mobile device.
- the first user input may indicate a vertical direction or a horizontal direction.
- the first user input may correspond to the user input 918.
- the method 1000 may further include performing a first image editing operation on the first image based on the first user input to generate a second image, at 1012.
- the first image editing operation may generate the image 924, such as by modifying the attribute 910 to generate the attribute 922.
- the first image editing operation may include modifying a color attribute of the image 908 to generate the image 924.
- the method 1000 may further include displaying the second image at the mobile device, at 1016.
- the image 924 may be displayed at the display device 904 of the mobile device 902.
- the method 1000 may further include receiving second user input at the mobile device, at 1020.
- the second user input indicates the direction.
- the second user input may substantially indicate another direction relative to the mobile device (e.g., a horizontal direction instead of a vertical direction indicated by the first user input, etc.).
- the second user input may correspond to the user input 928.
- the method 1000 may further include performing a second image editing operation on the second image to generate a third image, at 1024.
- the second image editing operation may modify the attribute 912 to generate the attribute 932, such as by blurring a layer of the image 924.
- the third image may correspond to the image 934.
- the method 1000 may optionally include receiving third user input indicating the direction relative to the mobile device.
- the third user input corresponds to a command to undo the first image editing operation and the second image editing operation.
- the user may "repeat" the user input (e.g., a swipe operation substantially in a particular direction) to "undo" the first image editing operation and the second image editing operation.
- the method 1000 illustrates simplified control of image editing operations.
- a user of a mobile device may perform multiple image editing operations using a particular input method (e.g., a swipe action), reducing complexity of image editing operations.
- a user may reconfigure user configuration parameters, such as to adjust an order of image editing operations.
- FIG. 11 depicts a particular illustrative embodiment of the mobile device 902.
- the mobile device 902 may include one or more processing resources 1110 (e.g., a processor, such as the processor 100, another processor, or a combination thereof).
- the one or more processing resources 1110 may be coupled to a computer-readable medium, such as to the memory 906 (e.g., a non-transitory computer-readable medium).
- the memory 906 may store instructions 1158 executable by the one or more processing resources 1110 and data 1152 usable by the one or more processing resources 1110.
- the memory 906 may store the image data 914 and the user configuration parameters 916.
- the mobile device 902 may include a camera having an image sensor, such as a charge-coupled device (CCD) image sensor and/or a complementary metal-oxide- semiconductor (CMOS) image sensor.
- FIG. 11 depicts that a camera 1156 may be coupled to a camera controller 1190.
- the camera controller 1190 may be coupled to the one or more processing resources 1110.
- the image data 914 may correspond to an image captured by the camera 1156.
- FIG. 11 also shows a display controller 1126 that is coupled to the one or more processing resources 1110 and to the display device 904.
- the display device 904 may be configured to present a user interface (UI) 1172.
- the display device 904 includes a touchscreen, and the UI 1172 is responsive to user operations at the touchscreen (e.g., a swipe operation).
- a coder/decoder (CODEC) 1134 can also be coupled to the one or more processing resources 1110.
- a speaker 1136 and a microphone 1138 can be coupled to the CODEC 1134.
- FIG. 11 also indicates that a wireless controller 1140 can be coupled to the one or more processing resources 1110.
- the wireless controller 1140 may be further coupled to an antenna 1142 via a radio frequency (RF) interface 1180.
- the one or more processing resources 1110, the memory 906, the display controller 1126, the camera controller 1190, the CODEC 1134, and the wireless controller 1140 are included in a system-in-package or system-on-chip device 1122.
- An input device 1130 and a power supply 1144 may be coupled to the system-on-chip device 1122.
- the display device 904, the input device 1130, the camera 1156, the speaker 1136, the microphone 1138, the antenna 1142, the RF interface 1180, and the power supply 1144 are external to the system-on-chip device 1122.
- each of the display device 904, the input device 1130, the camera 1156, the speaker 1136, the microphone 1138, the antenna 1142, the RF interface 1180, and the power supply 1144 can be coupled to a component of the system-on-chip device 1122, such as to an interface or to a controller.
- user preference input 1192 may be received at the mobile device 902.
- the user preference input 1192 may adjust the user configuration parameters.
- the user preference input 1192 may be received at the display device 904 (e.g., at a touchscreen of the display device 904), at the input device 1130 (e.g., at a keyboard of the input device 1130), or a combination thereof.
- the user preference input 1192 may reconfigure an order of image editing operations performed at the mobile device 902.
- the user preference input 1192 may reconfigure the user configuration parameters 916 to indicate that color change operations are to precede image blurring operations, as an illustrative example.
- the user preference input 1192 may reconfigure the user configuration parameters 916 from a first state to a second state.
- the first state may indicate that initial user input (e.g., the user input 918 of FIG. 9) is to initiate a color change operation and the subsequent user input (e.g., the user input 928 of FIG. 9) is to initiate an image blurring operation.
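The reconfiguration from the first state to a second state might look like a simple reordering of the configured operation list. The parameter keys and state contents here are assumptions for illustration.

```python
def apply_preference(params, new_order):
    """Return a copy of the configuration with a reordered operation list."""
    updated = dict(params)
    updated["operation_order"] = list(new_order)
    return updated

def next_operation(params, swipe_index):
    """Operation triggered by the Nth swipe under the current configuration."""
    order = params["operation_order"]
    return order[swipe_index % len(order)]

# First state (as in FIG. 9): first swipe = color change, second swipe = blur.
params = {"operation_order": ["color_change", "image_blur"]}
assert next_operation(params, 0) == "color_change"

# User preference input moves blurring ahead of color changes (second state).
params = apply_preference(params, ["image_blur", "color_change"])
assert next_operation(params, 0) == "image_blur"
```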
- the techniques of FIG. 11 enable simplified control of a user interface (UI) by a user of a mobile device.
- the UI may enable the user to set user configuration parameters that assign certain image editing operations to a particular user input (e.g., a swipe in a particular direction), which may simplify user control of an image editing application executed by the mobile device.
- the instructions 1158 may be executable by the one or more processing resources 1110 to perform one or more operations described herein.
- a computer-readable medium e.g., the memory 906 stores instructions (e.g., the instructions 1158) that are executable by a processor (e.g., the one or more processing resources 1110) to cause a mobile device (e.g., the mobile device 902) to display a first image (e.g., the image 908) at the mobile device and to receive first user input (e.g., the user input 918) at the mobile device.
- the first user input indicates a direction relative to the mobile device.
- the instructions are further executable by the processor to perform, based on the first user input, a first image editing operation on the first image to generate a second image (e.g., the image 924), to display the second image at the mobile device, and to receive second user input (e.g., the user input 928) at the mobile device.
- the second user input indicates the direction relative to the mobile device.
- the instructions are further executable by the processor to perform, based on the second user input, a second image editing operation on the second image to generate a third image (e.g., the image 934).
- an apparatus includes means for displaying (e.g., the display device 904) a first image (e.g., the image 908) at a mobile device (e.g., the mobile device 902) and means for receiving (e.g., the display device 904 and/or the input device 1130) first user input (e.g., the user input 918) at the mobile device.
- the first user input indicates a direction relative to the mobile device.
- the apparatus further includes means for performing a first image editing operation on the first image (e.g., the one or more processing resources 1110) to generate a second image (e.g., the image 924) based on the first user input, means for causing the mobile device to display the second image (e.g., the display device 904 and/or the input device 1130), and means for receiving (e.g., the display device 904 and/or the input device 1130) second user input (e.g., the user input 928).
- the second user input indicates the direction relative to the mobile device.
- the apparatus further includes means for performing a second image editing operation (e.g., the one or more processing resources 1110) on the second image to generate a third image (e.g., the image 934) based on the second user input.
- the method 1200 may be performed by a processor, such as the processor 100 and/or any of the processing resources 810, 1110.
- the method 1200 may be performed at a device, such as a mobile device (e.g., one or more of the mobile devices 800, 902).
- the method 1200 includes receiving first user input from a user interface, at 1204.
- the first user input may correspond to the user input 504, and the user interface may correspond to any of the UIs 500, 872, and 1172.
- the first user input indicates an image for a display operation.
- the image may correspond to the image 502.
- the first user input may correspond to a touchscreen operation that selects the image from an image gallery that is presented at the user interface.
- the first user input may correspond to a request to enlarge the image at the user interface from a "thumbnail" view to a "full" view.
- the method 1200 further includes performing the display operation and automatically initiating a clustering operation using image data corresponding to the image based on the first user input, at 1208.
- the clustering operation may be performed concurrently with the image being "loaded" at a mobile device. Loading the image may include enlarging the image (e.g., from a thumbnail view to a full view) or launching an image editing application, as illustrative examples.
- the clustering operation may include a SLIC operation. The clustering operation may be initiated to identify clusters within the image data while the display operation is performed to enlarge the image from a thumbnail view to a full view.
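The clustering described above can be illustrated with a toy sketch: a k-means over combined position and color features (in the spirit of SLIC superpixels, though a real SLIC implementation restricts each pixel's search to nearby cluster centers), started on a background thread so that it overlaps the display operation. This is an illustrative approximation under assumed data structures, not the implementation disclosed in this publication.

```python
import threading

def slic_like_clustering(pixels, k=4, iters=5):
    """Toy SLIC-style clustering: k-means over (x, y, r, g, b) features.

    `pixels` is a list of (x, y, (r, g, b)) tuples; returns one cluster
    label per pixel.
    """
    feats = [(x, y, *rgb) for x, y, rgb in pixels]
    centers = feats[:: max(1, len(feats) // k)][:k]  # seed centers on a grid
    labels = [0] * len(feats)
    for _ in range(iters):
        # Assignment step: nearest center in the joint space/color space.
        for i, f in enumerate(feats):
            labels[i] = min(
                range(len(centers)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])),
            )
        # Update step: move each center to the mean of its members.
        for c in range(len(centers)):
            members = [feats[i] for i in range(len(feats)) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return labels

# Kick off clustering on a background thread while the UI "loads" the image,
# hiding the clustering lag behind the thumbnail-to-full-view transition.
pixels = [(x, y, (255, 0, 0) if x < 4 else (0, 0, 255))
          for x in range(8) for y in range(8)]
result = {}
worker = threading.Thread(
    target=lambda: result.update(labels=slic_like_clustering(pixels, k=2)))
worker.start()
# ... the display operation (enlarging the thumbnail) would run here ...
worker.join()
```

On this synthetic two-color image the clusters separate the red and blue halves, since the squared color distance dominates the spatial terms.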
- the method 1200 may further include receiving second user input from the user interface, at 1216.
- the second user input may correspond to the user input 918.
- the second user input identifies a first image layer of the image.
- the first image layer may correspond to the image layer 104a.
- the second user input may identify a foreground of the image using a swipe action at a touchscreen device.
- the second user input may indicate an image editing operation targeting the foreground (e.g., a color change operation, an image blur operation, etc.).
- the method 1200 may further include automatically initiating an image segmenting operation associated with the first image layer, at 1220.
- the image segmenting operation may be initiated automatically upon completion of the second user input (e.g., completion of a swipe action).
- the image segmenting operation may be initiated automatically upon receiving user input identifying a background of the image.
- the method 1200 may further include performing an image component labeling operation, at 1222.
- the image component labeling operation may be initiated after completing the image segmenting operation.
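The image component labeling operation mentioned above can be sketched as 4-connected component labeling of a binary foreground mask via breadth-first flood fill. The mask representation and the flood-fill approach are illustrative assumptions; the publication does not specify this particular algorithm.

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary mask (list of lists).

    Returns a grid of labels: 0 for background, 1..N for each connected
    foreground component, found via breadth-first flood fill.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                next_label += 1
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels

mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
labels = label_components(mask)  # two components: top-left and bottom-right blobs
```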
- the method 1200 may further include receiving third user input from the user interface, at 1224.
- the third user input identifies a second image layer of the image.
- the third user input may identify a background of the image using a swipe action at a touchscreen device.
- the background may correspond to the second image layer.
- the third user input may correspond to the user input 928.
- the third user input may indicate an image editing operation targeting the background (e.g., a color change operation, an image blur operation, etc.).
- the method 1200 may further include modifying the image based on the third user input to generate a modified image, at 1228.
- the modified image may include a foreground and a background that are modified based on the second user input and the third user input, respectively.
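Applying distinct edits to the foreground and background, as in the modification step above, can be sketched by routing each pixel through one of two per-pixel operations according to a foreground mask. The mask, the tint, and the darken operations here are illustrative stand-ins, not the specific edits of this publication.

```python
def edit_layers(image, mask, fg_edit, bg_edit):
    """Apply fg_edit where the mask marks the foreground, bg_edit elsewhere.

    `image` is a grid of (r, g, b) tuples; each edit maps one pixel
    to one pixel.
    """
    return [
        [fg_edit(px) if m else bg_edit(px) for px, m in zip(row, mrow)]
        for row, mrow in zip(image, mask)
    ]

# Illustrative edits: tint the foreground red, darken the background.
tint_red = lambda p: (255, p[1], p[2])
darken = lambda p: tuple(v // 2 for v in p)

image = [[(10, 20, 30)] * 3 for _ in range(2)]
mask = [[1, 0, 0], [0, 0, 1]]
out = edit_layers(image, mask, tint_red, darken)
```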
- the method 1200 facilitates an enhanced image editing experience for a user. For example, by "hiding" the lag associated with an image clustering operation during loading of an image, an image editing operation may appear faster to the user, since clustering of the image may be completed before the user directly initiates the image editing operation through user input indicating an image editing operation to be performed on an image layer of the image. Further, the image segmenting operation may be initiated prior to receiving user input related to image processing operations of all image layers of the image.
- the image segmenting operation may be initiated as soon as user input related to a first image layer (e.g., a foreground) is received and without (or prior to) user input related to a second image layer (e.g., a background) being received.
- image segmenting and component labeling operations are performed with respect to the first image layer while a user performs a swipe action to indicate an image editing operation associated with the second image layer, enhancing responsiveness and speed of image editing operations and improving a user experience.
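The pipelining described above — beginning segmentation of the first layer as soon as its swipe completes, overlapping the user's second swipe — can be sketched with a background worker. The event names and the simulated work are hypothetical placeholders.

```python
import threading
import time

done = threading.Event()
segmented = {}

def segment_first_layer():
    # Hypothetical stand-in for segmenting + component labeling work
    # on the foreground layer.
    time.sleep(0.01)
    segmented["foreground"] = "labeled"
    done.set()

# Start segmenting as soon as the first swipe (foreground) completes...
threading.Thread(target=segment_first_layer).start()
# ...while the user is still performing the second swipe (background).
second_input = "background swipe"
done.wait()
```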
- a computer-readable medium may store instructions that are executable by a processor to cause the processor to receive first user input from a user interface.
- the computer-readable medium may correspond to one or more of the memories 832, 906, and the processor may correspond to the processor 100 and/or any of the processing resources 810, 1110.
- the user interface may correspond to any of the UIs 500, 872, and 1172.
- the first user input selects an image for a display operation.
- the image may correspond to the image 502, and the first user input may correspond to the user input 504.
- the instructions are further executable by the processor to perform the display operation based on the first user input and to automatically initiate a clustering operation using image data corresponding to the image based on the first user input.
- the image data may correspond to the image data 102.
- an apparatus (e.g., any of the mobile devices 800, 902) includes means for receiving first user input from a user interface.
- the means for receiving the first user input may correspond to the display 828 and/or the display device 904.
- the user interface may correspond to any of the UIs 500, 872, and 1172.
- the first user input selects an image for a display operation.
- the image may correspond to the image 502, as an illustrative example.
- the apparatus further includes means for performing the display operation based on the first user input (e.g., the display 828 and/or the display device 904) and means for automatically initiating a clustering operation (e.g., the processor 100 and/or any of the processing resources 810, 1110) using image data (e.g., the image data 102) corresponding to the image based on the first user input.
- a software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
- An exemplary non-transitory storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
- the storage medium may be integral to the processor.
- the processor and the storage medium may reside in an application-specific integrated circuit (ASIC) and/or a field programmable gate array (FPGA) chip.
- the ASIC and/or FPGA chip may reside in a computing device or a user terminal.
- the processor and the storage medium may reside as discrete components in a computing device or user terminal.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/111,175 US10026206B2 (en) | 2014-02-19 | 2014-02-24 | Image editing techniques for a device |
EP14882881.7A EP3108379B1 (en) | 2014-02-19 | 2014-02-24 | Image editing techniques for a device |
JP2016551791A JP6355746B2 (en) | 2014-02-19 | 2014-02-24 | Image editing techniques for devices |
CN201480074666.XA CN105940392B (en) | 2014-02-19 | 2014-02-24 | The image-editing technology of device |
KR1020167023875A KR101952569B1 (en) | 2014-02-19 | 2014-02-24 | Image editing techniques for a device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461941996P | 2014-02-19 | 2014-02-19 | |
US61/941,996 | 2014-02-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015123792A1 true WO2015123792A1 (en) | 2015-08-27 |
Family
ID=53877493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/000172 WO2015123792A1 (en) | 2014-02-19 | 2014-02-24 | Image editing techniques for a device |
Country Status (6)
Country | Link |
---|---|
US (1) | US10026206B2 (en) |
EP (1) | EP3108379B1 (en) |
JP (1) | JP6355746B2 (en) |
KR (1) | KR101952569B1 (en) |
CN (1) | CN105940392B (en) |
WO (1) | WO2015123792A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106506935A (en) * | 2015-09-08 | 2017-03-15 | Lg电子株式会社 | Mobile terminal and its control method |
WO2017105082A1 (en) * | 2015-12-16 | 2017-06-22 | 남기원 | Social-based image setting value sharing system and method therefor |
JP2018056792A (en) * | 2016-09-29 | 2018-04-05 | アイシン精機株式会社 | Image display controller |
EP4121949A4 (en) * | 2020-03-16 | 2024-04-03 | Snap Inc. | 3d cutout image modification |
US11995757B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Customized animation from video |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10268698B2 (en) * | 2014-11-21 | 2019-04-23 | Adobe Inc. | Synchronizing different representations of content |
US10270965B2 (en) * | 2015-12-04 | 2019-04-23 | Ebay Inc. | Automatic guided capturing and presentation of images |
US10810744B2 (en) * | 2016-05-27 | 2020-10-20 | Rakuten, Inc. | Image processing device, image processing method and image processing program |
US10325372B2 (en) * | 2016-12-20 | 2019-06-18 | Amazon Technologies, Inc. | Intelligent auto-cropping of images |
JP6434568B2 (en) * | 2017-05-18 | 2018-12-05 | 楽天株式会社 | Image processing apparatus, image processing method, and program |
KR101961015B1 (en) * | 2017-05-30 | 2019-03-21 | 배재대학교 산학협력단 | Smart augmented reality service system and method based on virtual studio |
EP3567548B1 (en) * | 2018-05-09 | 2020-06-24 | Siemens Healthcare GmbH | Medical image segmentation |
US11138699B2 (en) | 2019-06-13 | 2021-10-05 | Adobe Inc. | Utilizing context-aware sensors and multi-dimensional gesture inputs to efficiently generate enhanced digital images |
EP4032062A4 (en) | 2019-10-25 | 2022-12-14 | Samsung Electronics Co., Ltd. | Image processing method, apparatus, electronic device and computer readable storage medium |
EP4085422A4 (en) * | 2019-12-31 | 2023-10-18 | Qualcomm Incorporated | Methods and apparatus to facilitate region of interest tracking for in-motion frames |
US11069044B1 (en) * | 2020-03-18 | 2021-07-20 | Adobe Inc. | Eliminating image artifacts using image-layer snapshots |
CN112860163B (en) * | 2021-01-21 | 2022-11-11 | 维沃移动通信(深圳)有限公司 | Image editing method and device |
US20230252687A1 (en) * | 2022-02-10 | 2023-08-10 | Qualcomm Incorporated | Systems and methods for facial attribute manipulation |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040008886A1 (en) * | 2002-07-02 | 2004-01-15 | Yuri Boykov | Using graph cuts for editing photographs |
US20070147700A1 (en) * | 2005-12-28 | 2007-06-28 | Samsung Electronics Co., Ltd | Method and apparatus for editing images using contour-extracting algorithm |
EP1826723A1 (en) | 2006-02-28 | 2007-08-29 | Microsoft Corporation | Object-level image editing |
US20090252429A1 (en) | 2008-04-03 | 2009-10-08 | Dan Prochazka | System and method for displaying results of an image processing system that has multiple results to allow selection for subsequent image processing |
CN101976194A (en) * | 2010-10-29 | 2011-02-16 | 中兴通讯股份有限公司 | Method and device for setting user interface |
US20120148151A1 (en) | 2010-12-10 | 2012-06-14 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and storage medium |
CN102592268A (en) * | 2012-01-06 | 2012-07-18 | 清华大学深圳研究生院 | Method for segmenting foreground image |
CN103152521A (en) * | 2013-01-30 | 2013-06-12 | 广东欧珀移动通信有限公司 | Effect of depth of field achieving method for mobile terminal and mobile terminal |
US20140029868A1 (en) | 2008-06-25 | 2014-01-30 | Jon Lorenz | Image layer stack interface |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6973212B2 (en) | 2000-09-01 | 2005-12-06 | Siemens Corporate Research, Inc. | Graph cuts for binary segmentation of n-dimensional images from object and background seeds |
JP2004246460A (en) * | 2003-02-12 | 2004-09-02 | Daihatsu Motor Co Ltd | Computer graphics device and design program |
US7593020B2 (en) | 2006-05-30 | 2009-09-22 | Microsoft Corporation | Image editing using image-wide matting |
US8644600B2 (en) | 2007-06-05 | 2014-02-04 | Microsoft Corporation | Learning object cutout from a single example |
CN101802867B (en) * | 2007-07-18 | 2012-11-21 | 新加坡南洋理工大学 | Methods of providing graphics data and displaying graphics data |
JP2009037282A (en) * | 2007-07-31 | 2009-02-19 | Sharp Corp | Image browsing device |
US7995841B2 (en) | 2007-09-24 | 2011-08-09 | Microsoft Corporation | Hybrid graph model for unsupervised object segmentation |
US8259208B2 (en) * | 2008-04-15 | 2012-09-04 | Sony Corporation | Method and apparatus for performing touch-based adjustments within imaging devices |
WO2011085248A1 (en) * | 2010-01-07 | 2011-07-14 | Swakker, Llc | Methods and apparatus for modifying a multimedia object within an instant messaging session at a mobile communication device |
US8823726B2 (en) * | 2011-02-16 | 2014-09-02 | Apple Inc. | Color balance |
US9153031B2 (en) * | 2011-06-22 | 2015-10-06 | Microsoft Technology Licensing, Llc | Modifying video regions using mobile device input |
US8873813B2 (en) * | 2012-09-17 | 2014-10-28 | Z Advanced Computing, Inc. | Application of Z-webs and Z-factors to analytics, search engine, learning, recognition, natural language, and other utilities |
KR101792641B1 (en) | 2011-10-07 | 2017-11-02 | 엘지전자 주식회사 | Mobile terminal and out-focusing image generating method thereof |
US9041727B2 (en) | 2012-03-06 | 2015-05-26 | Apple Inc. | User interface tools for selectively applying effects to image |
CN103548056B (en) * | 2012-03-26 | 2017-02-22 | 松下电器(美国)知识产权公司 | Image-processing device, image-capturing device, and image-processing method |
TWI543582B (en) | 2012-04-17 | 2016-07-21 | 晨星半導體股份有限公司 | Image editing method and a related blur parameter establishing method |
US9285971B2 (en) | 2012-06-20 | 2016-03-15 | Google Inc. | Compartmentalized image editing system |
CN103885623A (en) * | 2012-12-24 | 2014-06-25 | 腾讯科技(深圳)有限公司 | Mobile terminal, system and method for processing sliding event into editing gesture |
CN103294362A (en) * | 2013-06-28 | 2013-09-11 | 贝壳网际(北京)安全技术有限公司 | Screen display control method and device for mobile equipment and mobile equipment |
US9697820B2 (en) * | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9965865B1 (en) * | 2017-03-29 | 2018-05-08 | Amazon Technologies, Inc. | Image data segmentation using depth data |
2014
- 2014-02-24 JP JP2016551791A patent/JP6355746B2/en active Active
- 2014-02-24 EP EP14882881.7A patent/EP3108379B1/en active Active
- 2014-02-24 CN CN201480074666.XA patent/CN105940392B/en not_active Expired - Fee Related
- 2014-02-24 US US15/111,175 patent/US10026206B2/en not_active Expired - Fee Related
- 2014-02-24 KR KR1020167023875A patent/KR101952569B1/en active IP Right Grant
- 2014-02-24 WO PCT/CN2014/000172 patent/WO2015123792A1/en active Application Filing
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040008886A1 (en) * | 2002-07-02 | 2004-01-15 | Yuri Boykov | Using graph cuts for editing photographs |
US20070147700A1 (en) * | 2005-12-28 | 2007-06-28 | Samsung Electronics Co., Ltd | Method and apparatus for editing images using contour-extracting algorithm |
EP1826723A1 (en) | 2006-02-28 | 2007-08-29 | Microsoft Corporation | Object-level image editing |
CN101390090A (en) * | 2006-02-28 | 2009-03-18 | 微软公司 | Object-level image editing |
US20090252429A1 (en) | 2008-04-03 | 2009-10-08 | Dan Prochazka | System and method for displaying results of an image processing system that has multiple results to allow selection for subsequent image processing |
US20140029868A1 (en) | 2008-06-25 | 2014-01-30 | Jon Lorenz | Image layer stack interface |
CN101976194A (en) * | 2010-10-29 | 2011-02-16 | 中兴通讯股份有限公司 | Method and device for setting user interface |
US20120148151A1 (en) | 2010-12-10 | 2012-06-14 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and storage medium |
CN102592268A (en) * | 2012-01-06 | 2012-07-18 | 清华大学深圳研究生院 | Method for segmenting foreground image |
CN103152521A (en) * | 2013-01-30 | 2013-06-12 | 广东欧珀移动通信有限公司 | Effect of depth of field achieving method for mobile terminal and mobile terminal |
Non-Patent Citations (4)
Title |
---|
BARRETT W A ET AL.: "Object-based image editing", COMPUTER GRAPHICS PROCEEDINGS, PROCEEDINGS OF SIGGRAPH ANNUAL INTERNATIONAL CONFERENCE ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES, 1 January 2002 (2002-01-01) |
CHIA-KAI LIANG ET AL.: "Computer Graphics Forum: Journal of the European Association for Computer Graphics", vol. 29, 7 June 2010, WILEY-BLACKWELL, article "TouchTone: Interactive Local Image Adjustment Using Point-and-Swipe", pages: 253 - 261 |
FORSYTH D A ET AL.: "Computer Vision - a modern approach", SEGMENTATION BY CLUSTERING, 1 January 2003 (2003-01-01) |
See also references of EP3108379A4 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106506935A (en) * | 2015-09-08 | 2017-03-15 | Lg电子株式会社 | Mobile terminal and its control method |
EP3141993A3 (en) * | 2015-09-08 | 2017-05-03 | Lg Electronics Inc. | Mobile terminal and method for controlling the same |
US10021294B2 (en) | 2015-09-08 | 2018-07-10 | Lg Electronics | Mobile terminal for providing partial attribute changes of camera preview image and method for controlling the same |
CN106506935B (en) * | 2015-09-08 | 2021-03-05 | Lg电子株式会社 | Mobile terminal and control method thereof |
WO2017105082A1 (en) * | 2015-12-16 | 2017-06-22 | 남기원 | Social-based image setting value sharing system and method therefor |
JP2018056792A (en) * | 2016-09-29 | 2018-04-05 | アイシン精機株式会社 | Image display controller |
EP4121949A4 (en) * | 2020-03-16 | 2024-04-03 | Snap Inc. | 3d cutout image modification |
US11995757B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Customized animation from video |
Also Published As
Publication number | Publication date |
---|---|
KR20160124129A (en) | 2016-10-26 |
EP3108379A1 (en) | 2016-12-28 |
US10026206B2 (en) | 2018-07-17 |
CN105940392A (en) | 2016-09-14 |
CN105940392B (en) | 2019-09-27 |
US20160335789A1 (en) | 2016-11-17 |
EP3108379A4 (en) | 2018-01-17 |
EP3108379B1 (en) | 2023-06-21 |
KR101952569B1 (en) | 2019-02-27 |
JP2017512335A (en) | 2017-05-18 |
JP6355746B2 (en) | 2018-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10026206B2 (en) | Image editing techniques for a device | |
JP6730690B2 (en) | Dynamic generation of scene images based on the removal of unwanted objects present in the scene | |
US9589595B2 (en) | Selection and tracking of objects for display partitioning and clustering of video frames | |
EP4154511A1 (en) | Maintaining fixed sizes for target objects in frames | |
CN112954210B (en) | Photographing method and device, electronic equipment and medium | |
TWI543610B (en) | Electronic device and image selection method thereof | |
CN104486552B (en) | A kind of method and electronic equipment obtaining image | |
CN113508416B (en) | Image fusion processing module | |
CN108830892B (en) | Face image processing method and device, electronic equipment and computer readable storage medium | |
US20130169760A1 (en) | Image Enhancement Methods And Systems | |
CN109903291B (en) | Image processing method and related device | |
CN107230182A (en) | A kind of processing method of image, device and storage medium | |
US10621730B2 (en) | Missing feet recovery of a human object from an image sequence based on ground plane detection | |
US20170039683A1 (en) | Image processing apparatus, image processing method, image processing system, and non-transitory computer readable medium | |
CN110266955B (en) | Image processing method, image processing apparatus, electronic device, and storage medium | |
CN108830219A (en) | Method for tracking target, device and storage medium based on human-computer interaction | |
CN114612283A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN109495778B (en) | Film editing method, device and system | |
KR102372711B1 (en) | Image photographing apparatus and control method thereof | |
CN111428551A (en) | Density detection method, density detection model training method and device | |
CN110070478B (en) | Deformation image generation method and device | |
US20230325980A1 (en) | Electronic device and image processing method thereof | |
WO2023193648A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
WO2022105757A1 (en) | Image processing method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14882881 Country of ref document: EP Kind code of ref document: A1 |
|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 15111175 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2014882881 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014882881 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2016551791 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20167023875 Country of ref document: KR Kind code of ref document: A |