GB2554668A - Image manipulation - Google Patents

Image manipulation

Info

Publication number
GB2554668A
GB2554668A
Authority
GB
United Kingdom
Prior art keywords
image
display
gesture type
touch
touch input
Prior art date
Legal status
Granted
Application number
GB1616720.7A
Other versions
GB201616720D0 (en)
GB2554668B (en)
Inventor
Chesnokov Viacheslav
Current Assignee
Apical Ltd
Original Assignee
Apical Ltd
Priority date
Filing date
Publication date
Application filed by Apical Ltd filed Critical Apical Ltd
Priority to GB1616720.7A priority Critical patent/GB2554668B/en
Publication of GB201616720D0 publication Critical patent/GB201616720D0/en
Priority to US15/717,134 priority patent/US11307746B2/en
Publication of GB2554668A publication Critical patent/GB2554668A/en
Application granted granted Critical
Publication of GB2554668B publication Critical patent/GB2554668B/en
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G06T5/92
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04808 Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Abstract

A method of image manipulation includes displaying an image on a first area 104 of a touch-sensitive electronic display and receiving touch input on a second area of the touchscreen, comprising the first area. A gesture 114 is detected from the touch input. A first gesture type comprises a larger component of motion along a first axis 116 than a second, orthogonal axis 118. A second gesture type comprises a larger component of motion along the second axis than the first axis. If the gesture is of the first type, a display characteristic (e.g. tone mapping strength, brightness, gamma correction strength, or saturation) of the image is adjusted, while displaying the image. If the gesture belongs to the second gesture type, the display ceases to display the image and displays the next or previous image. Detected gestures may be interpreted differently depending on whether the display is zoomed-in or not.

Description

(71) Applicant(s):
Apical Ltd (Incorporated in the United Kingdom)
110 Fulbourn Road, Cambridge, Cambridgeshire, CB1 9NJ, United Kingdom

(56) Documents Cited:
US 20160062571 A1, US 20110239155 A1, US 20100293500 A1, US 20130238724 A1, US 20110163971 A1, US 20080052945 A1

(58) Field of Search:
INT CL G06F
Other: EPODOC, WPI, TXTA

(72) Inventor(s):
Viacheslav Chesnokov

(74) Agent and/or Address for Service:
EIP, Fairfax House, 15 Fulwood Place, LONDON, WC1V 6HU, United Kingdom

(54) Title of the Invention: Image manipulation
Abstract Title: Image manipulation using directional gestures

(57) Abstract: as set out above.
[Drawings: five sheets showing FIGS. 1, 2a, 2b, 3a, 3b, 4a, 4b, 5a, 5b, 6a, 6b, 7 and 8; the drawing images are not reproduced here.]
IMAGE MANIPULATION
Technical Field [0001] The present invention relates to a method and a computing system for manipulating an image.
Background [0002] A software application is known that allows a captured image to be processed after capture. For example, the image can be modified as desired by a user and a modified version of the image can be saved for future use.
[0003] It is desirable to provide a method of manipulating an image that is intuitive and more flexible than the method of the known software application.
Brief Description of the Drawings [0004] Various features of the present disclosure will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate, by way of example only, features of the present disclosure, and wherein: [0005] FIG. 1 is a flow diagram illustrating a method according to examples;
[0006] FIGS. 2a and 2b illustrate schematically an example of a first gesture type;
[0007] FIGS. 3a and 3b illustrate schematically an example of a second gesture type;
[0008] FIGS. 4a and 4b illustrate schematically an example of a third gesture type;
[0009] FIGS. 5a and 5b illustrate schematically an example of a fourth gesture type;
[0010] FIGS. 6a and 6b illustrate schematically a further example of the third gesture type; [0011] FIG. 7 illustrates schematically a further example of the first gesture type; and [0012] FIG. 8 is a schematic diagram showing an example of internal components of a computing system.
Detailed Description [0013] Details of the method according to examples will become apparent from the following description, with reference to the FIGS. In this description, for the purpose of explanation, numerous specific details of certain examples are set forth. Reference in the specification to an example or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples. It should further be noted that certain examples are described schematically with certain features omitted and/or necessarily simplified for ease of explanation and understanding of the concepts underlying the examples. For example, in certain cases, a description of conventional features is simplified or omitted in order to provide a concise explanation of the method according to examples.
[0014] Examples described herein provide a method of manipulating an image, which may for example be implemented using a computing device or computing system. The image may be the entire or whole image or a portion, part or subset of a larger image. The image is for example an image from a web page accessed by a browser of the computing device, such as a browser of a smartphone; an image captured by an image capture device, such as a camera, of the computing device; or an image downloaded to or stored in storage of the computing device. The image may include any graphical or visual content, for example text, graphics, pictures, and/or photographs. The image may be represented by image data in any suitable format. Common formats include the JPEG (Joint Photographic Experts Group, ISO/IEC 10918) format, which is typically an 8-bit format, or the JPEG XT (ISO/IEC 18477) format, which is typically a more than 8-bit format.
[0015] FIG. 1 is a flow diagram illustrating the method according to examples. The method of FIG. 1 includes displaying an image on a first area of a touch-sensitive electronic display. The touch-sensitive electronic display is for example a display device of or coupled to a computing device, which is capable of receiving input via a user’s touch on the display itself. For example, the touch-sensitive electronic display may be the screen of a smartphone. The touch-sensitive electronic display has a first axis and a second axis which is orthogonal to the first axis.
[0016] The method of FIG. 1 further includes receiving touch input on a second area of the touch-sensitive electronic display. The second area includes the first area. The touch input is for example a touch of a body part of a user, such as one or more fingers or a hand, or a touch of an implement such as a stylus.
[0017] In the example of FIG. 1, the method includes detecting, from the touch input, a gesture type. A gesture type for example corresponds with a predetermined movement, motion, location or position of the touch input. The gesture type is one of a plurality of detectable gesture types including a first gesture type and a second gesture type. Detecting the first gesture type includes detecting a larger component of motion of the touch input along one of the first and second axes of the touch-sensitive electronic display than along the other of the first and second axes of the touch-sensitive electronic display. Detecting the second gesture type includes detecting a larger component of motion of the touch input along the other of the first and second axes of the touch-sensitive electronic display than along the one of the first and second axes of the touch-sensitive electronic display.
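The axis comparison described above can be expressed compactly. The following is a minimal sketch, not taken from the patent, with illustrative function and variable names; it assumes the first axis is vertical and the second axis is horizontal, as in the examples described below.

```python
# Illustrative sketch (not the patent's implementation) of classifying a touch
# gesture as the first or second gesture type by comparing its components of
# motion along two orthogonal display axes.

def classify_gesture(start, end):
    """start and end are (x, y) touch coordinates in display space.

    The first axis is taken to be vertical (y) and the second horizontal (x);
    the larger component of motion decides the gesture type.
    """
    dx = end[0] - start[0]  # motion along the second (horizontal) axis
    dy = end[1] - start[1]  # motion along the first (vertical) axis
    if abs(dy) > abs(dx):
        return "first_gesture_type"   # adjust a display characteristic
    return "second_gesture_type"      # switch to the next/previous image

# Example: a mostly vertical swipe is detected as the first gesture type.
assert classify_gesture((100, 400), (110, 150)) == "first_gesture_type"
```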
[0018] If the detected gesture type is the first gesture type, the method of FIG. 1 includes adjusting, during the displaying the image, a display characteristic of the image in dependence on at least one detected characteristic of the motion of the touch input. If the detected gesture type is the second gesture type, the method of FIG. 1 includes ceasing to display the image on the touch-sensitive electronic display; and displaying a further image on the touch-sensitive electronic display.
[0019] Example methods such as the method of FIG. 1 therefore allow the display characteristic of the image to be changed using a touch input corresponding to the first gesture type, which is for example different from the second gesture type. These methods therefore provide the user with a straightforward and intuitive way to both alter a display characteristic of the image and to switch between displaying different images on the touch-sensitive electronic display. For example, the user can interact with the touch-sensitive electronic display using a touch input corresponding to the first gesture type to alter the display characteristic of the image and can use a touch input corresponding to the second gesture type to move the image off screen and display a different image instead. The user therefore has multiple options for manipulating the image, for example by altering properties of the image itself or by ceasing to display the image and displaying a further image.
[0020] The display characteristic of the image can be changed while the image is displayed, without for example saving a modified copy of the image. In this way, by using the first gesture type, the user can flexibly alter the display characteristic of the image in real time, for example without having to save and re-load the image. This can allow the image to be manipulated more straightforwardly than known methods that involve saving a copy of a modified image. The method according to examples can therefore improve a viewing experience for a user, as the user can adjust the display characteristic of the image at will. For example, a user can adjust the display characteristic as needed if the user moves from a high brightness location, e.g. outside, in sunlight, to a low brightness location, e.g. in a dark room. The method also allows different users to adjust the image differently depending on their own preferences. For example, a first user may adjust the display characteristic to a particular level that he or she considers to represent an optimal or desired level, and then a second user of the same computing device may further adjust the display characteristic to a level that suits him or her, merely by using a touch input corresponding to the first gesture type.
[0021] FIGS. 2a and 2b illustrate schematically an example of the first gesture type. FIGS. 2a and 2b each show a computing device 100, which in this example is a smartphone. Internal components of an example computing device such as the computing device 100 of FIGS. 2a and 2b are described in further detail with reference to FIG. 8 below.
[0022] The smartphone 100 has a touch-sensitive electronic display 102. The touch-sensitive electronic display 102 has a first area 104. An image is displayed on the first area 104. In the example of FIGS. 2a and 2b, the image includes an image of a star 106. The first area 104 may correspond with an extent of an image being displayed by the touch-sensitive electronic display; for example, a boundary or border of the first area 104 may correspond with the boundary or border of the image as displayed. FIGS. 2a and 2b show such an example; in FIGS. 2a and 2b, the outer edge or extremity of the image is aligned with the edge of the first area 104, such that the image completely fills the first area 104. Alternatively, the first area 104 may be larger or smaller than the image.
[0023] The touch-sensitive electronic display 102 in examples such as that of FIGS. 2a and 2b also includes a second area, on which touch input can be received. For example, the second area may correspond with an area of the touch-sensitive electronic display that is responsive to a touch input, such as an area of the touch-sensitive electronic display in which a touchscreen is present.
[0024] In examples such as that of FIGS. 2a and 2b, the second area is coincident with the first area 104. For example, the second area may be the same as the first area. In other examples, however, the first area may be smaller than the second area, for example where the extent of the first area corresponds with the extent of the image and where the second area is an entire or whole touch-sensitive area of the smartphone display screen, which includes the first area. For example, the part of the second area not overlapped by the first area may correspond with or include a border area, which may partly or fully surround the first area. Such a border area may be a plain border area, for example including a plain or neutral colour such as black or grey, or the border area may contain other images or icons such as icons to interact with the computing device, e.g. a home icon to go back to a home screen, or a back icon to revert to a previously used application.
[0025] FIG. 2a shows a touch input 108 on the second area of the touch-sensitive electronic display 102 (which in this example corresponds with the first area 104). The touch input 108 in this example corresponds with pressure applied to the touch-sensitive electronic display 102 in the circled region labelled 108, which is for example applied by a finger of a user.
[0026] FIG. 2b shows movement of the touch input 108 from a first location 110 on the second area to a second location 112 on the second area along a path 114. In the example of FIG. 2b, the path 114 is substantially vertical (for example within 5 or 10 degrees of the vertical), although in other examples the path, first and/or second locations may be different from those of FIG. 2b. The touch-sensitive electronic display 102 of FIG. 2b has a first axis 116 and a second axis 118, which are shown in the Figure to the side of the touch-sensitive electronic display 102, for clarity. In examples such as FIG. 2b, the first axis 116 is substantially vertical, e.g. vertical, and the second axis 118 is substantially horizontal, e.g. horizontal, although in other examples, axes of the touch-sensitive electronic display may be at different orientations. Typically, however, the first axis 116 is orthogonal to the second axis 118.
[0027] From the touch input 108, a gesture type is detected. In FIG. 2b, the gesture type is a first gesture type, with motion of the touch input 108 along the first axis 116. In examples such as FIG. 2b, a touch input 108 is considered or detected to be a first gesture type if a larger component of motion of the touch input is along the first axis 116 than along the second axis 118. For example, the touch input may be angled with respect to the first axis 116, with a first component of motion along the first axis 116 and a second component of motion along the second axis 118. In these cases, where the touch input has components of motion along each of the first and second axes 116, 118, the touch input may be detected to correspond to the first gesture type where the first component of motion is larger, for example with a greater magnitude, than the second component of motion. In other examples, though, the touch input may not have a component of motion along both axes; for example, the touch input may be solely along the first axis 116 or solely along the second axis 118. In such examples, touch input solely along the first axis 116 may be detected to be the first gesture type. For example, the first gesture type may be a swipe or sliding movement of the touch input which is substantially along the first axis 116, for example with a greater magnitude along the first axis 116 than along the second axis 118.
[0028] In response to detecting that the touch input 108 corresponds with the first gesture type, a display characteristic of the image including the star 106 is adjusted during the displaying of the image, so that the properties of the image change in real time. In the example of FIGS. 2a and 2b, the display characteristic of the star 106 of FIG. 2a is adjusted based on detecting that the touch input 108 is of the first gesture type so as to display an adjusted star 106’ in FIG. 2b. The adjusted star 106’ is outlined with a solid line in FIG. 2b rather than a dashed line in FIG. 2a to indicate schematically that the display characteristic of the star has changed between FIG. 2a and 2b.
[0029] The display characteristic of the image may be adjusted in dependence on at least one detected characteristic of the motion of the touch input. The at least one detected characteristic of the motion of the touch input may include at least one of a length of the touch input or a direction of the touch input. For example, the length of the touch input may be used to determine the magnitude or amount by which the display characteristic is to be altered and the direction of the touch input may be used to determine the direction in which the display characteristic is to be altered, e.g. whether the display characteristic is to be increased or decreased. In other examples, though, the at least one detected characteristic of the motion of the touch input may include other features or properties of the motion of the touch input such as the number of points of contact of the touch input with the touch-sensitive electronic display, e.g. corresponding to the number of fingers or implements touching the display, a degree of rotation of the touch input, an orientation of the touch input, a velocity or acceleration of the touch input, or a pressure applied to the touch-sensitive electronic display by the touch input. [0030] Display characteristics that may be adjusted based on the touch input of the first gesture type may include any visible properties, features or attributes of the image. In examples, the display characteristic that may be adjusted based on a detected first gesture type includes at least one of a brightness of the image, a gamma correction strength of a gamma correction applied to the image, a saturation of the image, or a tone mapping strength of a tone mapping applied to the image.
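One plausible way to turn the detected direction and length of the motion into a signed adjustment for a display characteristic is sketched below. The scale factor and the sign convention (an upward swipe increases the characteristic) are assumptions made purely for illustration.

```python
# Illustrative sketch: derive a signed adjustment for a display characteristic
# from the direction and length of the touch motion. Screen coordinates are
# assumed to grow downwards, so an upward swipe has end_y < start_y.

def motion_to_adjustment(start_y, end_y, display_height, full_range=1.0):
    """Return a signed adjustment in the same units as the characteristic.

    An upward swipe increases the characteristic; the swipe length relative
    to the display height sets the magnitude, up to full_range for a swipe
    spanning the whole display.
    """
    length = start_y - end_y            # positive for an upward swipe
    return full_range * (length / display_height)

# A swipe covering half the display height upwards gives +0.5 of full range.
print(motion_to_adjustment(start_y=800, end_y=400, display_height=800))
```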
[0031] A brightness of a pixel of an image is for example an arithmetic mean of the red, green and blue colour coordinates or colour channel intensity values in the red, green and blue (RGB) colour space for that pixel. Alternatively, in the HSV (hue, saturation, value; sometimes referred to as hue, saturation, brightness, HSB) colour space, the brightness of a pixel may be taken as the value, size or magnitude of the value or brightness coordinate. The brightness of the image may be considered generally as the relative lightness of the image and typically depends on the brightness of the pixels of the image. For example, the image brightness may be an average or mean of the pixel brightnesses.
[0032] Adjusting the brightness for example can darken or brighten the image as a whole, for example by decreasing or increasing the brightness of image pixels. For example, altering the brightness may involve shifting the brightness for each of the pixels of the image by the same amount. The direction of the shift, for example whether the image is darkened or brightened, may be controlled based on the direction of the touch input. In some examples, the image may be darkened by a downward movement of the touch input, for example from an upper to a lower location on the second area of the touch-sensitive electronic display, and lightened or brightened by an upward movement of the touch input. The amount by which the image is darkened or brightened may depend on the length of the movement of the touch input, with long movements, with a larger distance between the location at which the touch input first contacts the second area of the touch-sensitive electronic display and the location at which the movement ceases or halts, corresponding with a larger magnitude change in brightness. For example, the movement may be considered to cease or halt at the point or location on the touch-sensitive electronic display where the touch input ceases to contact the touch-sensitive electronic display, or when the touch input remains stationary at a particular point or location on the touch-sensitive electronic display for a time period longer than a predetermined time period.
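A minimal sketch of such a uniform brightness shift follows, assuming the image is held as an 8-bit RGB NumPy array; this representation is an illustrative choice, not something the patent prescribes.

```python
import numpy as np

def adjust_brightness(image, delta):
    """image: H x W x 3 uint8 RGB array; delta: signed shift in [-255, 255].

    Shifts every pixel by the same amount, clipping to the valid range, so the
    whole image is darkened (negative delta) or brightened (positive delta).
    """
    shifted = image.astype(np.int16) + int(delta)
    return np.clip(shifted, 0, 255).astype(np.uint8)

# Example: brighten a dummy image by a quarter of the full range.
img = np.full((4, 4, 3), 100, dtype=np.uint8)
brighter = adjust_brightness(img, delta=64)   # all pixels become 164
```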
[0033] Gamma correction is typically a non-linear operation that may be defined using the following power-law expression:
Vout = A · Vin^γ (1)
where Vout is an output value, A is a constant, Vin is an input value and γ is a gamma value. The input and output values are for example luminance or tristimulus values of pixels of the image. [0034] The detected at least one characteristic of the touch input may be used to control or alter the γ-value in Equation 1. For example, a particular direction of motion, such as an upward motion, of the touch input may correspond with an increase in the γ-value and a different direction of motion of the touch input, such as a downward motion, may correspond with a decrease in the γ-value. The γ-value may be altered by an amount or magnitude corresponding to the length of the motion of the touch input.
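A short sketch applying Equation 1 to an 8-bit image is given below; normalising pixel values to [0, 1] before applying the power law is an implementation choice rather than something the patent specifies.

```python
import numpy as np

def apply_gamma(image, gamma, A=1.0):
    """Apply Vout = A * Vin**gamma to a uint8 RGB image.

    Values are normalised to [0, 1] before the power law and rescaled
    afterwards; gamma < 1 brightens mid-tones, gamma > 1 darkens them.
    """
    v_in = image.astype(np.float32) / 255.0
    v_out = A * np.power(v_in, gamma)
    return np.clip(v_out * 255.0, 0, 255).astype(np.uint8)

# An upward swipe might map to a smaller gamma (a brighter image), e.g.:
img = np.full((2, 2, 3), 64, dtype=np.uint8)
out = apply_gamma(img, gamma=0.8)
```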
[0035] Saturation is for example one of the coordinates in the HSL (hue, saturation, lightness) and HSV or HSB colour spaces. The saturation may be understood intuitively as the relative bandwidth of a colour of a pixel in wavelength space. For example, a highly saturated colour may correspond to a colour with a narrow bandwidth, which is highly peaked in wavelength space. In contrast, a colour with a low saturation may have a large bandwidth, which may appear more “washed out”.
[0036] The saturation may be adjusted in dependence on the at least one characteristic of the motion of the touch input similarly to adjustment of the brightness or gamma correction strength, with a direction of the motion indicating or determining whether the saturation is to be increased or decreased and a length of the motion determining the amount or magnitude by which the saturation is to be altered.
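A hedged sketch of such a saturation adjustment follows, assuming the Pillow library is available and using its ImageEnhance.Color helper; the mapping from the swipe-derived adjustment to the enhancement factor is illustrative only.

```python
# Saturation adjustment sketch using Pillow's ImageEnhance.Color: a factor of
# 1.0 leaves the image unchanged, 0.0 gives greyscale, and values above 1.0
# increase saturation.
from PIL import Image, ImageEnhance

def adjust_saturation(pil_image, adjustment):
    """adjustment in [-1, 1], e.g. derived from the swipe direction and length."""
    factor = max(0.0, 1.0 + adjustment)
    return ImageEnhance.Color(pil_image).enhance(factor)

img = Image.new("RGB", (4, 4), (200, 80, 80))
more_saturated = adjust_saturation(img, adjustment=0.5)
```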
[0037] Tone mapping typically refers to a process by which a dynamic range of an image is adjusted to enhance the quality of an image, where the dynamic range is generally understood to refer to the ratio between intensities of the brightest and darkest parts of an image or scene. For example, tone mapping can be used to enhance detail or contrast in the image, while still ensuring the image appears relatively “natural” to an observer. To do this, the tone mapping may be asymmetric in the brightness domain, such that a greater amount of tone mapping is applied to dark regions of the image than relatively bright regions, for example by altering an intensity value of relatively dark portions of the image to a greater extent than relatively bright portions. This mimics the behavior of the human eye, which has a relatively high dynamic range, and which is capable of seeing detail in even relatively dark regions of an image. Tone mapping applied to the image may therefore be spatially-variant, for example spatially nonuniform, with a greater amount of tone mapping applied to certain spatial regions of the image compared with other spatial regions, although spatially-invariant or uniform tone mapping is also possible. The tone mapping may be continuous and smoothly-varying in both spatial and luminance dimensions. The intensity range of pixels corresponding with detail to preserve in the image in dark and/or light areas may therefore be increased and the intensity range of other areas of the image may be decreased. The amount of tone mapping may correspond with the extent or magnitude of alteration of the intensity value of pixels in the image by the tone mapping, for example to enhance the image detail as explained above.
[0038] The dynamic range may be compressed or expanded by the tone mapping. Dynamic range compression can be used to reduce the dynamic range of the image to match or be closer to a dynamic range displayable by the touch-sensitive electronic display, for example. Images captured using a camera can have a high dynamic range of for example up to around 4000:1. In contrast, the dynamic range of typical display devices may be much lower than this, for example around 50:1. Dynamic range compression can therefore be applied to reduce a dynamic range of image data representing a high dynamic range image to match a lower dynamic range of the touch-sensitive electronic display for displaying the image.
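For illustration only, the sketch below uses a simple global x/(1+x) compression curve to squeeze a high scene dynamic range into a display range; it is not the spatially-variant tone mapping or the ORMIT algorithm referred to in this description.

```python
import numpy as np

def compress_dynamic_range(hdr, display_max=1.0):
    """Simple global operator mapping linear HDR intensities to a display range.

    hdr: float array of linear scene intensities (any positive range).
    The x / (1 + x) curve compresses bright values far more than dark ones,
    so a high scene dynamic range fits the display's much lower range.
    """
    compressed = hdr / (1.0 + hdr)
    return compressed * display_max

# A scene spanning roughly 4000:1 is squeezed into the display's range.
scene = np.array([0.001, 0.1, 1.0, 4.0])
print(compress_dynamic_range(scene))
```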
[0039] Conversely, dynamic range expansion can be used to increase a dynamic range of the image, for example in cases where the dynamic range displayable by the touch-sensitive electronic display is larger than a dynamic range of the image data representing the image to be displayed.
[0040] A suitable tone mapping algorithm is the Orthogonal Retina-Morphic Image Transform (ORMIT) algorithm, although various other, different, tone mapping algorithms are also suitable.
[0041] In examples, a tone mapping strength of a tone mapping applied to the image may be adjusted in dependence on the at least one characteristic of the motion of the touch input. For example, the tone mapping strength may be increased or decreased depending on a direction of the motion and by an amount or magnitude depending on a length of the motion of the touch input, similarly to adjustment of the brightness, gamma correction strength and saturation as described above. The tone mapping strength may for example take a value between 0 and 1, which may represent an amount of spatially-variant tone mapping, such as an amount or magnitude by which each pixel’s intensity or brightness is altered by the tone mapping. The tone mapping strength itself may be different for different pixels in the image, in order to achieve an amount of tone mapping which varies across the image. For example, the tone mapping strength may vary in accordance with pixel intensity so that the tone mapping is stronger (for example with a higher strength) in darker parts of the image with low pixel intensity values, and is weaker in brighter parts of the image. This allows stronger enhancement of the shadows without affecting the bright regions. In such cases, the tone mapping strength may not be uniformly changed or altered by the touch input. For example, the tone mapping strength may be adjusted using a formula, such that the tone mapping strength is adjusted more or less depending on pixel intensity values. In examples in which the tone mapping uses the ORMIT algorithm, the tone mapping strength is the ORMIT α parameter.
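The sketch below illustrates one way a touch-controlled strength could be weighted towards darker pixels when blending a tone-mapped result with the original; the weighting formula is an assumption and is not the ORMIT formulation.

```python
import numpy as np

def apply_tone_mapping_strength(original, tone_mapped, strength):
    """Blend an original and a tone-mapped image with a per-pixel strength.

    original, tone_mapped: float arrays in [0, 1] of the same shape.
    strength: scalar in [0, 1] set from the touch input. It is weighted so
    that darker pixels receive more of the tone-mapped result than bright
    ones, giving stronger enhancement of the shadows (an illustrative
    weighting only).
    """
    darkness = 1.0 - original                 # 1 for black pixels, 0 for white
    per_pixel = np.clip(strength * darkness, 0.0, 1.0)
    return (1.0 - per_pixel) * original + per_pixel * tone_mapped
```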
[0042] In examples, the first gesture type may adjust solely one of the display characteristics of the image, for example solely the tone mapping strength, solely the brightness, solely the gamma correction strength or solely the saturation of the image. In such examples, the method may additionally include receiving a further touch input to switch between display characteristic adjustment modes. For example, the user may be able to supply a particular touch input corresponding with a particular gesture of the plurality of detectable gesture types in order to switch between a mode in which the first gesture type adjusts the tone mapping strength to a different mode in which the first gesture type adjusts the brightness. The particular touch input may involve touching a particular region of the second area, such as a region corresponding with a given icon. For example, the user may be able to click on an icon corresponding with a “tone mapping adjustment” mode, or an icon corresponding with a “brightness adjustment” mode or other icons corresponding to other display characteristics in order to switch between these modes, to allow each of various display characteristics to be adjusted in turn.
[0043] In the example of FIGS. 2a and 2b, the display characteristic of the image is adjusted by a touch input on the image itself, which is in the first area 104 (corresponding to the second area). However, in other examples, the first gesture type may correspond with a touch input on a region of the second area outside the first area, for example a border area of the second area. In such cases, the touch input may be a touch input on a particular icon in the border area. For example, the border area may include respective images of a scale and a slider, which is moveable on screen relative to the scale. A position of the slider on the scale may be altered in dependence on the touch input to control the display characteristic of the image.
[0044] An output display characteristic based on the adjusting the display characteristic of the image may be stored and a subsequent image may be displayed with a subsequent image display characteristic based on the output display characteristic. In this way, the previously-obtained settings, which typically correspond with a user preference, may be saved and re-used to display future images. This can allow subsequent images to be displayed with the same or a corresponding display characteristic as the image, for example allowing a direct comparison between two different images with the same display characteristic. However, as the display characteristic of the subsequent image may also be adjusted using a touch input corresponding to the first gesture type, as described above for the image with respect to FIGS. 2a and 2b, this provides additional flexibility for the viewing of the subsequent image.
[0045] The output display characteristic may be stored in an image file including image data representing the image. For example, the output display characteristic may be stored as metadata associated with the image data. For example, where the image file is in the form of a JPEG, the output display characteristic may be stored in the Exchangeable Image File Format (EXIF). The EXIF data may be embedded within the image file itself, for example within the JPEG file. Typically, EXIF data is stored in a header of the JPEG. For example, EXIF data may be stored in one of the utility Application Segments of the JPEG, generally the APP1 (segment marker 0xFFE1), although other segments may be used.
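As a hedged sketch of this kind of persistence, the following stores and reads back a value using Pillow's EXIF support (assuming a reasonably recent Pillow version); writing it under the ImageDescription tag (0x010E) is an illustrative choice rather than anything specified by the patent.

```python
from PIL import Image

def save_with_display_characteristic(src_path, dst_path, strength):
    """Save a JPEG copy with the output display characteristic in its EXIF data."""
    img = Image.open(src_path)
    exif = img.getexif()
    exif[0x010E] = f"tone_mapping_strength={strength:.3f}"  # ImageDescription tag
    img.save(dst_path, "JPEG", exif=exif)

def load_display_characteristic(path):
    """Return the stored strength, or None if the file carries no such value."""
    exif = Image.open(path).getexif()
    text = exif.get(0x010E, "")
    if isinstance(text, str) and text.startswith("tone_mapping_strength="):
        return float(text.split("=", 1)[1])
    return None
```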
[0046] By storing the output display characteristic in the image file including the image data representing the image, the method in examples allows further images to be generated based on the image data and the output display characteristic, from data contained within the image file itself. This allows the display characteristic, and hence the visual impression, of the image to be reproduced at different times, for example in different viewing conditions, or by different computing devices, based on the image file.
[0047] FIGS. 3a and 3b illustrate schematically an example of the second gesture type on the smartphone 100 shown in FIGS. 2a and 2b. Features of FIGS. 3a and 3b that are the same as corresponding features of FIGS. 2a and 2b are labelled with the same reference numerals; corresponding descriptions should be taken to apply. Features of FIGS. 3a and 3b that are similar to but not the same as corresponding features of FIGS. 2a and 2b are labelled with the same reference numerals but incremented by 100; corresponding descriptions should nevertheless be taken to apply.
[0048] FIG. 3a shows a touch input 208 on the second area of the touch-sensitive electronic display 102 (which corresponds with the first area 104 in this example). The touch input 208 is moved in FIG. 3b from a first location 210 on the second area to a second location on the second area along a path 214. In the example of FIG. 3b, the path 214 is substantially horizontal (for example within 5 or 10 degrees of the horizontal).
[0049] A gesture type is detected from the touch input 208, which in FIG. 3b is a second gesture type. In examples such as FIG. 3b, a touch input 208 is considered or detected to be a second gesture type if a larger component of motion of the touch input 208 is along the second axis 118 than along the first axis 116. In FIG. 3b, the path 214 is along the second axis 118. Therefore the motion of the touch input 208 is entirely or wholly along the second axis 118, and the component of motion of the touch input 208 along the first axis 116 is zero. In further examples, though, the touch input 208 may have a respective non-zero component along each of the first and second axes 116, 118, with a larger magnitude component along the second axis 118 than along the first axis 116.
[0050] Thus, in examples, the first gesture type differs from the second gesture type in that a touch input corresponding to the first gesture type has a larger component of motion along a different axis than the second gesture type. In this example, a touch input with a larger component of motion in a vertical direction (along the first axis 116) is associated with the first gesture type and a touch input with a larger component of motion in a horizontal direction (along the second axis 118) is associated with the second gesture type. Touch inputs corresponding respectively with the first gesture type and the second gesture type may be otherwise identical. Alternatively, these touch inputs may differ from each other in one or more other respects.
[0051] In response to detecting that the touch input 208 of FIG. 3b corresponds with the second gesture type, the touch-sensitive electronic display ceases to display the image and instead displays a further image, for example a different image than the originally displayed image. In this way, touch input 208 corresponding to the second gesture type may be used to switch between displaying various different images on the touch-sensitive electronic display. [0052] In examples such as that of FIG. 3b, the ceasing to display the image on the touch-sensitive electronic display includes moving the image off the touch-sensitive electronic display along the other of the first and second axes of the touch-sensitive electronic display than the one of the first and second axes with a larger component of motion of a touch input of the first gesture type. This is illustrated in FIG. 3b, which shows the image including the star 106 being moved off screen along the second axis 118, whereas the first gesture type has a larger component of motion of the touch input along the first axis 116. FIG. 3b shows a snapshot of a position of the image partway through the moving of the image. Subsequently, the image will continue moving leftwards, along the second axis 118, until the touch-sensitive electronic display no longer displays the image and solely displays the further image 120. In examples, the image may be moved off screen smoothly along the second axis 118, although in other examples the movement of the image off screen may be more sudden or jerky.
[0053] FIGS. 4a and 4b illustrate schematically an example of a third gesture type on the smartphone 100 shown in FIGS. 2a and 2b. Features of FIGS. 4a and 4b that are the same as corresponding features of FIGS. 2a and 2b are labelled with the same reference numerals; corresponding descriptions should be taken to apply.
[0054] In examples, such as that of FIGS. 4a and 4b, the smartphone 100 has at least a first display mode and a second display mode. The first display mode is, for example, a non-zoomed-in display mode, such as a mode illustrated in FIGS. 2 and 3, and the second display mode is, for example, a zoomed-in display mode, illustrated in FIGS. 4a and 4b. This can be seen by comparing the size of the star 106 in FIG. 2a with the star 106 of FIG. 4a; the star 106 is larger in FIG. 4a because the image is zoomed-in in FIG. 4a. A zoomed-in display mode is thus, for example, a display mode in which content to be displayed is scaled, for example increased in size.
[0055] In such examples, the first gesture type and the second gesture type may be detectable in the first display mode for the image and a third gesture type may be detectable in the second display mode for the image. If the detected gesture type is the third gesture type, the display characteristic of the image may be adjusted, during the displaying the image, in dependence on at least one detected characteristic of the motion of the touch input, for example similarly to the adjustment of the display characteristic upon detection of the first gesture type.
[0056] Features or properties of the touch input corresponding respectively to the first gesture type and the third gesture type may be the same, except that the first gesture type is detectable in the first display mode and the third gesture type is detectable in the second display mode. FIGS. 4a and 4b show such an example. As can be seen by comparing FIGS. 2b and 4b, the motion of the touch input 108 in these Figures is the same, and has the same effect of adjusting the display characteristic of the image.
[0057] However, in other examples in which the plurality of detectable gesture types include a fourth gesture type which is detectable during the zoomed-in display mode, a given touch input, if received during the non-zoomed-in display mode, is detected as the first gesture type and, if received during the zoomed-in display mode, is detected as the fourth gesture type. In these examples, detecting the third gesture type may include detecting an additional touch input compared to detecting the first gesture type.
[0058] FIGS. 5a and 5b illustrate schematically an example of the fourth gesture type. Features of FIGS. 5a and 5b that are the same as corresponding features of FIGS. 2a and 2b are labelled with the same reference numerals; corresponding descriptions should be taken to apply. Features of FIGS. 5a and 5b that are similar to but not the same as corresponding features of FIGS. 2a and 2b are labelled with the same reference numerals but incremented by 100; corresponding descriptions should nevertheless be taken to apply.
[0059] The smartphone 200 of FIG. 5a is similar to the smartphone of FIGS. 2 to 4. However, in contrast to the smartphone 100 of FIGS. 2 to 4, a given touch input is detected as the first gesture type in the non-zoomed-in mode and the fourth gesture type in the zoomed-in mode, whereas the given touch input is detected as the first gesture type in the non-zoomed-in mode and the third gesture type in the zoomed-in mode in the smartphone 100 of FIGS. 2 to 4. [0060] FIG. 5a shows an image in a zoomed-in mode, similarly to FIG. 4a. As can be seen in FIG. 5b, when the touch input 208 is moved from the first location 110 to a further location 212 along a path 214 in the same direction as the path 114 of FIG. 4b, the display characteristic of the image is not adjusted. Instead, the image is scrolled, for example moved upwards and partly off the touch-sensitive electronic display. Scrolling may for example refer to a sliding movement of the image across the touch-sensitive electronic display. The touch input 208 of FIG. 5b corresponds to the first gesture type as it has a larger component of motion along the first axis 116. However, as the touch input 208 is received during the zoomed-in display mode, it is detected as a fourth gesture type. In this example, the fourth gesture type corresponds with a command to scroll or alter a position of the image or other elements displayed on the touch-sensitive electronic display. In other examples, though, the fourth gesture type may correspond with other commands.
[0061] In order to alter the display characteristic of the image in the zoomed-in mode, an additional touch input must be applied in the example of FIGS. 5 and 6. This is illustrated in FIGS. 6a and 6b, which shows the smartphone 200 of FIGS. 5a and 5b. Features of FIGS. 6a and 6b that are the same as corresponding features of FIGS. 2a and 2b are labelled with the same reference numerals; corresponding descriptions should be taken to apply. Features of FIGS. 6a and 6b that are similar to but not the same as corresponding features of FIGS. 2a and 2b are labelled with the same reference numerals but incremented by 100; corresponding descriptions should nevertheless be taken to apply.
[0062] FIG. 6a shows the image being displayed in the zoomed-in mode. A touch input 108 is received on the second area. FIG. 6b illustrates the touch input 108 being moved from the first location 110 to the second location 112 along the path 114. The touch input 108 and the movement of the touch input 108 are the same as the touch input 108 of FIGS. 4a and 4b. However, in the example of FIGS. 6a and 6b, an additional touch input 122 is also received on the second area in addition to the touch input 108. The touch input 108 and the additional touch input 122 are detected as the third gesture type. The display characteristic of the image is adjusted, during the displaying the image, in dependence on at least one detected characteristic of the motion of the touch input, as described with reference to FIGS. 4a and 4b. In examples such as this, detecting a touch input as a third gesture type may include detecting a larger component of motion of the touch input 108 along the one of the first and second axes of the touch-sensitive electronic display, for example along the same axis along which the touch input has a larger component of motion for the first gesture type in the non-zoomed-in mode.
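A compact sketch of this variant, in which the same vertical swipe is interpreted differently depending on the display mode and the number of simultaneous touches, is given below; the behaviour for a horizontal swipe while zoomed in is not specified by the description, so the sketch leaves it open.

```python
# Illustrative sketch of the variant in FIGS. 5 and 6: a vertical swipe is the
# first gesture type when not zoomed in, the fourth gesture type (scroll) when
# zoomed in, and becomes the third gesture type (adjust the display
# characteristic) only when an additional touch is held at the same time.

def detect_gesture(dx, dy, zoomed_in, touch_count):
    vertical = abs(dy) > abs(dx)
    if not zoomed_in:
        return "first" if vertical else "second"
    if vertical and touch_count >= 2:
        return "third"         # adjust display characteristic while zoomed in
    if vertical:
        return "fourth"        # scroll the zoomed-in image
    return "unspecified"       # horizontal swipe while zoomed in: not covered here

assert detect_gesture(dx=5, dy=-200, zoomed_in=True, touch_count=1) == "fourth"
assert detect_gesture(dx=5, dy=-200, zoomed_in=True, touch_count=2) == "third"
```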
[0063] Thus, in examples such as that of FIGS. 6a and 6b, a multi-touch input is used to differentiate the third gesture type from the fourth gesture type, for example by associating the third gesture type with an input including a combination of a touch input and an additional touch input. This provides additional flexibility and options for image manipulation for the user. For example, a predetermined sub-area of the touch-sensitive electronic display may be allocated for receiving the additional touch input. In such cases, the detecting the additional touch input may include detecting the additional touch input on the predetermined sub-area. The predetermined sub-area may be part of the first area or part of the second area or outside one or both of the first and second areas. Suitable locations for the predetermined sub-area include a corner of the touch-sensitive electronic display such as a bottom corner. These locations may be used in cases in which the touch-sensitive electronic display is intended to be used in a landscape orientation or in other orientations of the touch-sensitive electronic display. [0064] The size of the predetermined sub-area may be selected based on characteristics of an intended user of the touch-sensitive electronic display. For example, the predetermined sub-area may correspond with or approximately equal the size of an average human thumb, for example with an area which is within plus or minus 10%, plus or minus 20%, plus or minus 30%, plus or minus 40%, or plus or minus 50% of the surface area of an average portion of a human thumb that would come into contact with the touch-sensitive electronic display when a human touches the display.
[0065] An additional touch input corresponding with the third gesture type in an example of a multi-touch input may therefore involve holding or touching a predetermined sub-area of the touch-sensitive electronic display located in the bottom left corner of the touch-sensitive electronic display with the thumb on the same hand as used for holding the touch-sensitive electronic display (which is typically the left hand, for example where the touch-sensitive electronic display is part of a smartphone). The right hand can then be used to apply the touch input, for example to adjust the display characteristic of the image or to switch between images displayed on the touch-sensitive electronic display. Either one or more fingers or the thumb of the right hand can be used for applying the touch input.
[0066] In a further example of a multi-touch input, an additional touch input corresponding with the third gesture type may be input by the left thumb as explained above. However, the touch-sensitive electronic display may be held by the right hand and the thumb of the right hand may be used for applying the touch input.
[0067] In yet further examples in which there is a multi-touch input, the actions of the left and right hands in the examples above may be reversed.
[0068] A multi-touch input may also be used in other modes, such as a non-zoomed-in mode. FIG. 7 shows such an example. Features of FIG. 7 that are the same as those of FIGS. 2a and 2b are labelled with the same reference numerals; corresponding descriptions should be taken to apply. The smartphone 300 of FIG. 7 is similar to the smartphone 100 of FIGS. 2a and 2b except that it is configured to detect a different first gesture type than the smartphone 100 of FIGS. 2a and 2b. In such examples, detecting the first gesture type may include detecting a plurality of touch inputs comprising the touch input. For example, FIG. 7 shows the touch input 108 and an additional touch input 122; the touch input 108 and the additional touch input 122 are the same as the respective touch input 108 and the additional touch input 122 described with reference to FIGS. 6a and 6b, except that they are in the non-zoomed-in mode in FIG. 7. The touch input 108 is moved along the path 114 to adjust the display characteristic of the image, similarly to the example of FIGS. 6a and 6b.
[0069] An overview of examples of internal components for the computing device, such as the smartphones 100, 200, 300 of FIGS. 2 to 7, is provided below with reference to FIG. 8. [0070] The computing device of FIG. 8 includes a network interface 124. The network interface 124 allows image files to be retrieved from a server device 126. The network interface 124 of the computing device may include software and/or hardware components, such as a virtual network interface, an Ethernet port, a software driver and/or communications stack interacting with network hardware.
[0071] Storage 128 of the computing device in the example of FIG. 8 stores data 130 received at the network interface 124. The data 130 in this example includes an image file including image data representing an image for display. The storage 128 may include at least one of volatile memory, such as a Random Access Memory (RAM) and non-volatile memory, such as Read Only Memory (ROM) or a solid state drive (SSD) such as Flash memory. The storage 128 in examples may include further storage devices, for example magnetic, optical or tape media, compact disc (CD), digital versatile disc (DVD) or other data storage media. The storage 128 may be removable or non-removable from the computing device.
[0072] At least one processor 132 is communicatively coupled to the storage 128 in the computing device of FIG. 8. The at least one processor 132 in the example of FIG. 8 may be a microprocessor, a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof designed to perform the functions described herein. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The at least one processor 132 may also be or include at least one graphics processing unit (GPU) such as an NVIDIA® GeForce® GTX 980, available from NVIDIA®, 2701 San Tomas Expressway, Santa Clara, CA 95050, USA, although other processors are possible. For example, in one case the computing device may include a thin terminal with graphics processing capabilities; in other cases the computing device may include a computing device comprising at least one central processing unit (CPU) and at least one graphics processing unit.
[0073] The storage 128 in the example of FIG. 8 includes an image displaying module 134 configured to display the image on the first area of the touch-sensitive electronic display 102, and a gesture type detection module 136 configured to detect, from a touch input on the second area of the touch-sensitive electronic display, a gesture type which is one of a plurality of detectable gesture types. As described above, the plurality of detectable gesture types include a first gesture type and a second gesture type, detecting the first gesture type including detecting a larger component of motion of the touch input along one of the first and second axes of the touch-sensitive electronic display than along the other of the first and second axes of the touch-sensitive electronic display and detecting the second gesture type including detecting a larger component of motion of the touch input along the other of the first and second axes of the touch-sensitive electronic display than along the one of the first and second axes of the touch-sensitive electronic display.
[0074] The storage 128 in this example further includes a display characteristic adjustment module 138 configured to, if the detected gesture type is the first gesture type, adjust, during displaying the image on the first area of the electronic display, a display characteristic of the image in dependence on at least one detected characteristic of the motion of the touch input. The storage 128 also includes an image switching module 140 configured to, if the detected gesture type is the second gesture type, cease displaying the image on the touch-sensitive electronic display 102 and display a further image on the touch-sensitive electronic display 102.
[0075] One or more of the image displaying module 134, the gesture type detection module 136, the display characteristic adjustment module 138, or the image switching module 140 may be implemented as hardware. Alternatively, one or more of these modules may be implemented as software, or as a combination of hardware and software. Where at least one of these modules is at least partly implemented as software, the storage 128 may include computer program instructions configured to, when processed by the at least one processor 132, implement the respective module. The computer program instructions may be stored in an accessible non-transitory computer-readable medium and loaded into memory, for example the storage 128, to implement the respective module. In examples, the storage 128 and the computer program instructions are configured to, with a graphics processing unit of the at least one processor 132, implement at least one of the modules. For example, use of the graphics processing unit may allow for parallel processing of multiple operations for adjustment of the display characteristic of the image, improving the speed at which the display characteristic is altered.
[0076] The components of the computing device in the example of FIG. 8 are interconnected using a systems bus 142. This allows data to be transferred between the various components. For example, an image file including data representing the image may be stored in the storage 128 and subsequently transmitted via the systems bus 142 from the storage 128 to a display interface 144 for transfer to the touch-sensitive electronic display 102 for display. The display interface 144 may include a display port and/or an internal electronics interface, e.g. where the touch-sensitive electronic display 102 is part of the computing device such as a display screen of a smartphone. Therefore, when instructed by the at least one processor 132 via the display interface 144, the touch-sensitive electronic display 102 will display an image based on the image data.
[0077] The touch-sensitive electronic display 102 is for example a conventional touchscreen. For example, the touch-sensitive electronic display 102 may be or include a resistive touchscreen panel or a capacitive touchscreen panel configured to detect one or more touches of an input or proximity of the input that occur at various locations on the panel to produce signals representative of a location of the input for each of the touches.
[0078] The above examples are to be understood as illustrative examples. Further examples are envisaged. In the examples of FIGS. 2 to 7, touch input with a larger component in the vertical direction is considered to correspond to the first gesture type. However, in other examples, touch input with a larger component in the vertical direction may correspond with the second gesture type and the first gesture type may correspond with a touch input with a larger component in the horizontal direction. In yet further examples, the first and second axes may not be horizontal and vertical. For example, the first and second axes may be rotated with respect to vertical and horizontal axes.
[0079] In the example of FIG. 8, an image file including image data representing an image is received from a server device. However, in other examples, the image file may be stored on storage of the computing device. For example, the image may have been captured by an image capture device such as a camera of or coupled to the computing device, or may have been downloaded or transferred to the computing device from storage other than that of a server device, and stored as the image file on storage of the computing device.
[0080] Examples described above refer to adjusting the tone mapping strength by adjusting the ORMIT α parameter, which is obtained based on the at least one characteristic of the motion of the touch input. However, in other examples, the tone mapping strength may be derived by further processing of a motion characteristic value obtained based on the at least one characteristic of the motion of the touch input. The motion characteristic value is for example a value between 0 and 1, which depends on a length and/or direction of motion of the touch input. For example, a tone mapping strength applied to the image may be derived by combining the motion characteristic value with a further tone mapping strength parameter to generate a combined tone mapping strength. For example, whereas the motion characteristic value depends on the touch input, which e.g. corresponds with a user preference, the further tone mapping strength parameter may depend on a different parameter or property. For example, the further tone mapping strength parameter may depend on a predetermined value; a display property of a display device configured to display an output image based on the output image data; an ambient light level; or an application property of an application for use in displaying the output image based on the output image data.
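As a minimal sketch of one possible mapping (the specific mapping and function name are assumptions, not taken from the description), a motion characteristic value between 0 and 1 could be derived from a vertical drag as follows:

def motion_characteristic_value(start_y: float, end_y: float, display_height: float) -> float:
    """Map the length and direction of a vertical drag to a value in [0, 1].

    Illustrative assumption: an upward drag over the full display height maps to 1,
    no motion maps to 0, and downward motion is clamped to 0.
    """
    displacement = start_y - end_y  # positive for an upward drag (screen y increases downwards)
    return max(0.0, min(1.0, displacement / display_height))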
[0081] The pre-determined value may be, for example, a value that a content creator or image supplier has determined is an optimal or desired tone mapping to obtain a desired output image for viewing. For example, the creator or supplier of the image may have ascertained that the image quality of the image is optimal in particular viewing conditions with a particular reference tone mapping strength parameter used as an input to the tone mapping. This may be determined for example by adjusting the tone mapping strength to adjust the strength of the tone mapping applied to the image, analyzing the display quality of the output image after the application of the tone mapping, for example by eye or electronically, and storing a reference tone mapping strength corresponding with the optimal display quality as part of the input image file as further tone mapping strength data representing the further tone mapping strength parameter. The viewing conditions the further tone mapping strength parameter is optimized for may be relatively dark viewing conditions. In such cases, the further tone mapping strength parameter may be zero, for example such that the tone mapping does not alter the input image data, so that the output image and the input image are the same. In other cases, the reference tone mapping strength parameter may be non-zero. The further tone mapping strength parameter may depend on the content of the image. For example, where the image includes human skin, the further tone mapping strength parameter may be non-zero as human skin has a limited brightness, and therefore may be enhanced by tone mapping, for example to amplify detail in the skin.
[0082] The display property of the display device, such as one of the touch-sensitive electronic displays 100, 200, 300 described above, may be any property, characteristic or attribute that may affect the display quality of the image. For example, the display property may be a luminance of the display device, e.g. a maximum brightness or intensity of light emitted from a backlight for illuminating pixels of the display device or a maximum pixel luminance, or a display device type. Typically, a different amount of tone mapping is required for different types of display device, for example liquid crystal display devices (LCDs) compared with organic light emitting diode display devices (OLEDs), to achieve a given display quality of an image, for example with a given amount of detail visible in dark regions of the image.
[0083] Where the further tone mapping strength parameter depends on the ambient light level, the ambient light level can be measured for example by an ambient light sensor. The ambient light sensor may be coupled to or integral with the computing device. Such an ambient light sensor may include one or more photodetectors; the use of multiple photodetectors may increase the reliability of the measurement of diffuse ambient light.
[0084] As explained above, in some cases the further tone mapping strength parameter may depend on an application property of an application for use in displaying the image. An application property is for example a property specified by the developer, manufacturer or designer of the application that is intended for use in displaying the image, for example a browser or other application capable of displaying images. The application property may for example specify that images should be displayed with a particular tone mapping, for example where it is desired to give images displayed using the application a particular “look”. For example, the application developers may wish to display hyper-realistic images, with a high dynamic range, or murky images, with little detail visible, with a low dynamic range.
[0085] The motion characteristic value and the further tone mapping strength parameter may be combined in various ways, as the skilled person will appreciate. For example, the motion characteristic value may be or correspond with a particular, e.g. a pre-determined, gain G. The gain G may be expressed as:
(2) G = D / DTM

where D is the dynamic range of the image data before tone mapping and DTM is a predetermined output dynamic range to be obtained after the tone mapping.
[0086] The input value α to the tone mapping may be derived from the gain G as follows:

(3) α = (G - 1) / (Gmax - 1)

where G is the gain defined in (2), and Gmax is the maximum gain achievable with a maximum tone mapping strength.
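For illustration, equations (2) and (3) can be evaluated together as in the following sketch; the function name and the numerical figures in the usage example are invented for the example.

def alpha_from_dynamic_range(d: float, d_tm: float, g_max: float) -> float:
    """Derive the tone mapping input value from equations (2) and (3).

    d:     dynamic range D of the image data before tone mapping.
    d_tm:  predetermined output dynamic range DTM after tone mapping.
    g_max: maximum gain achievable with a maximum tone mapping strength.
    """
    g = d / d_tm                      # equation (2)
    return (g - 1.0) / (g_max - 1.0)  # equation (3)

# Example with invented figures: compressing a 4000:1 input range to 1000:1,
# with a maximum achievable gain of 8, gives alpha = (4 - 1) / (8 - 1) ≈ 0.43.
alpha = alpha_from_dynamic_range(d=4000.0, d_tm=1000.0, g_max=8.0)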
[0087] Where the motion characteristic value and the further tone mapping strength parameter are combined, both the motion characteristic value and the further tone mapping strength parameter may correspond with different respective gain values. In such cases, the gain associated with the motion characteristic value, denoted as a first gain G1, and the gain associated with the further tone mapping strength parameter, denoted as a second gain G2, may be multiplied together as follows to obtain a combined gain denoted as Gc:

(4) Gc = G1 * G2

[0088] Similarly, the further tone mapping strength parameter may be combined with more than one set of further tone mapping strength parameters by multiplying the first gain G1 with the respective gain corresponding with each set of further tone mapping strength parameters.

[0089] The combined strength parameter αc may then be calculated as in equation (3), with the gain G replaced by the combined gain Gc:

αc = (Gc - 1) / (Gmax - 1)
[0090] As the skilled person will appreciate, other methods or algorithms may be used to combine the motion characteristic value and the further tone mapping strength parameter. For example, where the motion characteristic value equals a tone mapping strength parameter α1 and the further tone mapping strength parameter equals a different tone mapping strength parameter α2, the combined strength parameter αc may be obtained by multiplying α1 and α2.

[0091] The motion characteristic value and the further tone mapping strength parameter may be combined using software, hardware or a combination of software and hardware.
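A corresponding sketch of the gain-domain combination (the function name is hypothetical; the conversion back to a strength parameter uses the same form as equation (3)):

def combined_alpha(g1: float, g2: float, g_max: float) -> float:
    """Combine the first gain G1 and the second gain G2 and derive the combined
    strength parameter alpha_c."""
    g_c = g1 * g2                       # equation (4): Gc = G1 * G2
    return (g_c - 1.0) / (g_max - 1.0)  # equation (3) applied to the combined gain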
[0092] In other examples, a method sometimes referred to as alpha-blending may be used to tone map the image. As the skilled person will appreciate, alpha-blending typically involves overlaying or combining two versions of the same image: a first version of the image with no tone mapping applied (or a lower or different amount of tone mapping than the second version of the image) and a second version with non-zero tone mapping applied, which may be with maximal tone mapping applied, for example. A relative contribution of the first and second versions of the image to the image as displayed on the touch-sensitive electronic display may depend on the at least one detected characteristic of the motion of the touch input (e.g. the motion characteristic value referred to above).
[0093] In such examples, the tone mapping strength may be a combined tone mapping strength parameter (e.g. obtained from a combination of the motion characteristic value and the further tone mapping strength parameter as described above), or the motion characteristic value itself. Where the tone mapping strength is the combined tone mapping strength parameter, αc, the pixel intensity values of pixels of the image may be modified as:

(5) Iout = I1 * (1 - αc) + I2 * αc

where Iout is the output intensity value for the output image data representing the image as displayed on the touch-sensitive electronic display, I1 is the pixel intensity value from the first version of the image and I2 is the pixel intensity value from the second version of the image.
[0094] Other blending schemes are also possible. For example, the pixel intensity values may instead be modified as:
(6) Iout = √(I1² * (1 - αc) + I2² * αc)

where Iout, I1, I2 and αc are as previously defined.
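The two blending schemes of equations (5) and (6) may be sketched as follows; the array and function names are illustrative, and the quadratic form reflects the reconstruction of equation (6) given above.

import numpy as np

def blend_linear(i1: np.ndarray, i2: np.ndarray, alpha_c: float) -> np.ndarray:
    """Equation (5): linear blend of the first and second versions of the image."""
    return i1 * (1.0 - alpha_c) + i2 * alpha_c

def blend_quadratic(i1: np.ndarray, i2: np.ndarray, alpha_c: float) -> np.ndarray:
    """Equation (6), as reconstructed: blend the squared pixel intensities and
    take the square root of the result."""
    return np.sqrt(np.square(i1) * (1.0 - alpha_c) + np.square(i2) * alpha_c)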
[0095] In examples in which the touch input is detected to correspond to the first gesture type, this may be taken to be an indication that the tone mapping strength (or another display characteristic) is to vary. In such examples, first image data representing a first version of the image with a first amount of tone mapping, which may be spatially-variant, may be stored in a first frame buffer, and second image data representing a second version of the image with a second amount of tone mapping, which may also be spatially-variant, may be stored in a second frame buffer. The first amount of spatially-variant tone mapping is, for example, zero and the second amount of spatially-variant tone mapping is, for example, non-zero, and may be a maximal amount of tone mapping.
[0096] By storing the first image data in a first frame buffer and the second image data in a second frame buffer, various different amounts of alpha-blending can readily be applied to the first image data and the second image data. This can allow for rapid changing of the tone mapping applied to the image, for example based on the touch input.
[0097] For example, a display characteristic adjustment module of the computing device may receive, for each of at least one additional frame for display by the touch-sensitive electronic display, a respective additional input value determined in dependence on at least one characteristic of the motion of the touch input. For example, each additional input value may correspond with a change in the motion of the touch input within a predetermined time period, e.g. corresponding to one frame, compared with a previous time period. In this way, the user may vary the tone mapping applied in each of the at least one additional frame based on the touch input.
[0098] The display characteristic adjustment module may further be arranged to generate, for each of the at least one additional frame, an additional frame buffer storing additional output image data representing an additional output image based on the first image data and the second image data, a relative contribution of the first image data and the second image data to the additional output image data depending on the additional input value for the respective frame.
[0099] In such examples, the image may therefore be displayed in a first frame and, in each of the at least one additional frame, the respective additional output image may be displayed. In such cases, the at least one additional frame is, for example, subsequent to the first frame.

[00100] This method for example allows each of the at least one additional frame to be associated with a different amount of alpha-blending of the first version of the image and the second version of the image, allowing the tone mapping of the image to be rapidly varied as the image is displayed. For example, there is no need to re-retrieve the image data or to recalculate or redo the tone mapping for each frame. Instead, it is merely necessary to recalculate the pixel intensities for the image to be displayed based on the motion characteristic value and/or the further tone mapping strength parameter, for example by changing the relative contribution of the first version of the image and the second version of the image to the additional output image. This can be performed rapidly, for example by a graphics processing unit of the computing device.
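A minimal sketch of this two-frame-buffer scheme (all names hypothetical): the tone-mapped and non-tone-mapped buffers are computed once, and only the blend is recomputed for each additional frame from the input value derived from the touch input.

import numpy as np

def render_additional_frames(first_buffer: np.ndarray,
                             second_buffer: np.ndarray,
                             per_frame_inputs: list[float]) -> list[np.ndarray]:
    """Recombine two pre-computed frame buffers for each additional frame.

    first_buffer:  first image data, e.g. with zero tone mapping applied.
    second_buffer: second image data, e.g. with maximal tone mapping applied.
    per_frame_inputs: one input value per additional frame, derived from the
        motion of the touch input; only the blend is recomputed per frame,
        not the tone mapping itself.
    """
    frames = []
    for value in per_frame_inputs:
        frames.append(first_buffer * (1.0 - value) + second_buffer * value)
    return frames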
[00101] In yet further examples, the tone mapping controlled in dependence on the at least one characteristic of the motion of the touch input may be a further tone mapping applied to an image that has already been tone mapped.
[00102] It is to be understood that any feature described in relation to any one example may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the examples, or any combination of any other of the examples. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the accompanying claims.

Claims (15)

1. A method comprising:
displaying an image on a first area of a touch-sensitive electronic display, the touch-sensitive electronic display comprising a first axis and a second axis which is orthogonal to the first axis;
receiving touch input on a second area of the touch-sensitive electronic display, the second area comprising the first area; and detecting, from the touch input, a gesture type which is one of a plurality of detectable gesture types, wherein the plurality of detectable gesture types comprise a first gesture type and a second gesture type, wherein detecting the first gesture type comprises detecting a larger component of motion of the touch input along one of the first and second axes of the touch-sensitive electronic display than along the other of the first and second axes of the touch-sensitive electronic display and detecting the second gesture type comprises detecting a larger component of motion of the touch input along the other of the first and second axes of the touch-sensitive electronic display than along the one of the first and second axes of the touch-sensitive electronic display, and wherein:
if the detected gesture type is the first gesture type, the method comprises:
adjusting, during the displaying the image, a display characteristic of the image in dependence on at least one detected characteristic of the motion of the touch input; and if the detected gesture type is the second gesture type, the method comprises:
ceasing to display the image on the touch-sensitive electronic display; and displaying a further image on the touch-sensitive electronic display.
2. The method according to claim 1, wherein the plurality of detectable gesture types comprise a third gesture type, wherein the first gesture type and the second gesture type are detectable in a first display mode for the image and wherein the third gesture type is detectable in a second display mode for the image, and wherein:
if the detected gesture type is the third gesture type, the method comprises adjusting, during the displaying the image, the display characteristic of the image in dependence on at least one detected characteristic of the motion of the touch input.
3. The method according to claim 2, wherein the first mode is a non-zoomed-in display mode and the second mode is a zoomed-in display mode.
4. The method according to claim 3, wherein the plurality of detectable gesture types comprise a fourth gesture type which is detectable during the zoomed-in display mode, and wherein:
a given touch input, if received during the non-zoomed-in display mode, is detected as the first gesture type and, if received during the zoomed-in display mode, is detected as the fourth gesture type; and detecting the third gesture type comprises detecting an additional touch input compared to detecting the first gesture type.
5. The method according to any one of claims 2 to 4, wherein detecting the third gesture type comprises detecting a larger component of motion of the touch input along the one of the first and second axes of the touch-sensitive electronic display than along the other of the first and second axes of the touch-sensitive electronic display.
6. The method according to any one of claims 1 to 4, wherein detecting the first gesture type comprises detecting a plurality of touch inputs comprising the touch input.
7. The method according to any one of claims 1 to 6, wherein ceasing to display the image on the touch-sensitive electronic display comprises moving the image off the touch-sensitive electronic display along the other of the first and second axes of the touch-sensitive electronic display.
8. The method according to any one of claims 1 to 7, wherein the second area is coincident with the first area.
9. The method according to any one of claims 1 to 8, comprising:
storing an output display characteristic based on the adjusting the display characteristic of the image; and displaying a subsequent image with a subsequent image display characteristic based on the output display characteristic.
10. The method according to any one of claims 1 to 9, comprising:
storing an output display characteristic based on the adjusting the display characteristic of the image in an image file comprising image data representing the image.
11. The method according to any one of claims 1 to 10, wherein the at least one detected characteristic of the motion of the touch input comprises at least one of:
a length of the touch input; or a direction of the touch input.
12. The method according to any one of claims 1 to 11, wherein the display characteristic comprises at least one of:
a tone mapping strength of a tone mapping applied to the image; a brightness of the image;
a gamma correction strength of a gamma correction applied to the image; or a saturation of the image.
13. The method according to any one of claims 1 to 12, wherein the first axis is a substantially vertical axis and the second axis is a substantially horizontal axis, detecting the first gesture type comprising detecting the larger component of the motion of the touch input along the first axis and detecting the second gesture type comprising detecting the larger component of the motion of the touch input along the second axis.
14. The method according to any one of claims 1 to 13, wherein the image is in an 8-bit JPEG (Joint Photographic Experts Group) format or a more than 8-bit JPEG XT format.
15. A computing system comprising: a computing device; and a touch-sensitive electronic display coupled to the computing device, the touch-sensitive electronic display comprising a first axis, a second axis which is orthogonal to the first axis, a first area and a second area, the second area comprising the first area, wherein the computing device comprises: storage;
at least one processor communicatively coupled to the storage; an image displaying module configured to:
display the image on the first area of the touch-sensitive electronic display;
a gesture type detection module configured to:
detect, from a touch input on the second area of the touch-sensitive electronic display, a gesture type which is one of a plurality of detectable gesture types, the plurality of detectable gesture types comprising a first gesture type and a second gesture type, detecting the first gesture type comprising detecting a larger component of motion of the touch input along one of the first and second axes of the touch-sensitive electronic display than along the other of the first and second axes of the touch-sensitive electronic display and detecting the second gesture type comprising detecting a larger component of motion of the touch input along the other of the first and second axes of the touch-sensitive electronic display than along the one of the first and second axes of the touch-sensitive electronic display;
a display characteristic adjustment module configured to, if the detected gesture type is the first gesture type:
adjust, during displaying the image on the first area of the electronic display, a display characteristic of the image in dependence on at least one detected characteristic of the motion of the touch input; and an image switching module configured to, if the detected gesture type is the second gesture type:
cease displaying the image on the touch-sensitive electronic display; and display a further image on the touch-sensitive electronic display.
GB1616720.7A 2016-09-30 2016-09-30 Image manipulation Active GB2554668B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1616720.7A GB2554668B (en) 2016-09-30 2016-09-30 Image manipulation
US15/717,134 US11307746B2 (en) 2016-09-30 2017-09-27 Image manipulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1616720.7A GB2554668B (en) 2016-09-30 2016-09-30 Image manipulation

Publications (3)

Publication Number Publication Date
GB201616720D0 GB201616720D0 (en) 2016-11-16
GB2554668A true GB2554668A (en) 2018-04-11
GB2554668B GB2554668B (en) 2022-06-22

Family

ID=57570920

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1616720.7A Active GB2554668B (en) 2016-09-30 2016-09-30 Image manipulation

Country Status (2)

Country Link
US (1) US11307746B2 (en)
GB (1) GB2554668B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11681422B2 (en) * 2010-02-16 2023-06-20 John W. Rowles Methods for a user selectable digital mirror
CN109240572B (en) * 2018-07-20 2021-01-05 华为技术有限公司 Method for obtaining picture, method and device for processing picture
US20220398008A1 (en) * 2020-01-24 2022-12-15 Ming Li Volume Adjusting Gesture and Mistouch Prevention on Rolling Devices
US11675494B2 (en) * 2020-03-26 2023-06-13 Snap Inc. Combining first user interface content into second user interface

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080052945A1 (en) * 2006-09-06 2008-03-06 Michael Matas Portable Electronic Device for Photo Management
US20100293500A1 (en) * 2009-05-13 2010-11-18 International Business Machines Corporation Multi-finger touch adaptations for medical imaging systems
US20110163971A1 (en) * 2010-01-06 2011-07-07 Wagner Oliver P Device, Method, and Graphical User Interface for Navigating and Displaying Content in Context
US20110239155A1 (en) * 2007-01-05 2011-09-29 Greg Christie Gestures for Controlling, Manipulating, and Editing of Media Files Using Touch Sensitive Devices
US20130238724A1 (en) * 2012-03-06 2013-09-12 Apple Inc. Sharing images from image viewing and editing application
US20160062571A1 (en) * 2014-09-02 2016-03-03 Apple Inc. Reduced size user interface

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065249B2 (en) * 2002-07-25 2006-06-20 Microsoft Corp. System and method for image editing
US8564544B2 (en) * 2006-09-06 2013-10-22 Apple Inc. Touch screen device, method, and graphical user interface for customizing display of content category icons
US8259208B2 (en) * 2008-04-15 2012-09-04 Sony Corporation Method and apparatus for performing touch-based adjustments within imaging devices
US7870496B1 (en) * 2009-01-29 2011-01-11 Jahanzeb Ahmed Sherwani System using touchscreen user interface of a mobile device to remotely control a host computer
US9299168B2 (en) * 2012-03-06 2016-03-29 Apple Inc. Context aware user interface for image editing
WO2013164022A1 (en) * 2012-05-02 2013-11-07 Office For Media And Arts International Gmbh System and method for collaborative computing
US9552067B2 (en) * 2012-06-22 2017-01-24 Apple Inc. Gesture interpretation in navigable zoom mode
US9286706B1 (en) * 2013-12-06 2016-03-15 Google Inc. Editing image regions based on previous user edits
US10430051B2 (en) * 2015-12-29 2019-10-01 Facebook, Inc. Multi-user content presentation system

Also Published As

Publication number Publication date
US20180095647A1 (en) 2018-04-05
US11307746B2 (en) 2022-04-19
GB201616720D0 (en) 2016-11-16
GB2554668B (en) 2022-06-22

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20220929 AND 20221005