AU2015271983A1 - System and method for modifying an image

Info

Publication number
AU2015271983A1
Authority
AU
Australia
Prior art keywords
image
distribution
regions
depth
depth values
Legal status
Abandoned
Application number
AU2015271983A
Inventor
Nicolas Pierre Marie Frederic Bonnier
Timothy Stephen Mason
Peter Jan Pakulski
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Application filed by Canon Inc
Priority to AU2015271983A (AU2015271983A1)
Priority to US15/381,466 (US10198794B2)
Publication of AU2015271983A1


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

A system and method of modifying an image are disclosed. The method comprises the steps of: determining a distribution of depth values associated with a plurality of regions in the image, the image captured using an image capture device (210); determining a distribution of image feature characteristics associated with content in the plurality of image regions (210); selecting an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics (220); and applying the selected image process to the image to modify a relative perceived depth of the plurality of regions in the image (230).
[Abstract figure: Fig. 4 flowchart - image acquisition; obtain image band; identify objects; select image processes for objects; apply selected image processes; resolve inconsistencies.]

Description

SYSTEM AND METHOD FOR MODIFYING AN IMAGE

TECHNICAL FIELD
[0001] The present invention relates generally to the field of image processing, and in particular to the field of modifying images to affect depth perception of the image by humans.
BACKGROUND
[0002] When visual artists such as photographers consider the beauty or visual effect of an image, one of the aspects they may consider is how well the image communicates a sense of depth. The human perception of depth in a conventional still image is influenced by “monocular depth cues”, i.e. aspects of the image that suggest depth even when viewed by a single eye. An example monocular depth cue is occlusion: if a first object obscures vision of a second object, this is a strong cue that the first object is in front of the second object. Other monocular cues are more subtle and may create an impression of depth without the viewer being able to specify how that impression was formed.
[0003] Many visual artists such as photographers want to edit or process a captured image to alter the perceived depth of the image. Some arrangements affect the perceived depth of an image by altering monocular cues of the image using image processing. A human observer, when viewing an image of a scene, will have some perception of the scene depth based on depth cues of the image. For example, one monocular depth cue is aerial perspective, which is associated with light scattering in the atmosphere. According to aerial perspective, distant objects are hazier and have less contrast. If the image of the scene is processed as a function of a depth map, such that closer regions have a relative contrast increase and distant regions have a relative contrast decrease compared with the original image, the observer's perception of the depth of the scene will typically be strengthened. Thus image processing in such arrangements can affect the perceived depth of an image.
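As an illustration of the depth-dependent contrast adjustment described above (not part of the original disclosure), the following Python sketch raises local contrast for near pixels and lowers it for far pixels as a function of a depth map. The gain values and the assumption that the depth map is normalised to [0, 1] with 0 denoting the nearest point are illustrative only.

    import cv2
    import numpy as np

    def aerial_perspective_boost(image, depth, near_gain=1.3, far_gain=0.7):
        """Strengthen the aerial-perspective cue by raising local contrast in
        near regions and lowering it in far regions, as a function of depth.

        image : float32 RGB array scaled to [0, 1]
        depth : float32 array scaled to [0, 1], 0 = nearest, 1 = farthest
        """
        # Low-pass version of the image; the residual carries the local contrast.
        base = cv2.GaussianBlur(image, (0, 0), 5)
        detail = image - base
        # Per-pixel contrast gain interpolated between the near and far settings.
        gain = near_gain + (far_gain - near_gain) * depth
        out = base + gain[..., np.newaxis] * detail
        return np.clip(out, 0.0, 1.0)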
[0004] Some arrangements apply image processing to improve the perceived depth of a computer-generated image for a target purpose, such as tracing data paths in three-dimensional graphs or reaching for objects. Using full knowledge of the three-dimensional layout of scenes, and the ability to completely re-render the computer-generated scenes, monocular cues such as shadowing can be added or manipulated. Decisions such as whether or not to render a ground plane (to allow for the texture gradient monocular depth cue) can be made according to the specific task and scene parameters.
[0005] Existing approaches do not clarify how to determine scene suitability when considering non-computer-generated images.
SUMMARY
[0006] It is an object of the present disclosure to substantially overcome, or at least ameliorate, at least one disadvantage of present arrangements.
[0007] A first aspect of the present disclosure provides a computer-implemented method of modifying an image, said method comprising the steps of: determining a distribution of depth values associated with a plurality of regions in the image, the image captured using an image capture device; determining a distribution of image feature characteristics associated with content in the plurality of image regions; selecting an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics; and applying the selected image process to the image to modify a relative perceived depth of the plurality of regions in the image.
[0008] According to another aspect of the present disclosure, the method further comprises identifying a plurality of objects in the captured image based upon the distribution of the depth values and the distribution of the image feature characteristics.
[0009] According to another aspect of the present disclosure, the method further comprises identifying a plurality of objects in the image based upon alignment of the distribution of the depth values and the distribution of the image feature characteristics.
[0010] According to another aspect of the present disclosure, the method further comprises selecting an image process for each object of the plurality of objects based on the depth values and image feature characteristics associated with the object.
[0011] According to another aspect of the present disclosure, selecting the image process comprises determining an aesthetic target for the image to modify a perceived depth of at least one of the plurality of regions.
[0012] According to another aspect of the present disclosure, the method further comprises determining an aesthetic target of the image based upon the distribution of depth values and the distribution of image feature characteristics.
[0013] According to another aspect of the present disclosure, the method further comprises selecting the image process for each of the plurality of regions based on the aesthetic target.
[0014] According to another aspect of the present disclosure, the distribution of the depth values is determined in relation to each pixel of each of the plurality of regions.
[0015] According to another aspect of the present disclosure, the method further comprises selecting a plurality of pixels relating to a region or object to which the image process is to be applied.
[0016] According to another aspect of the present disclosure, the selected image process is applied to the selected pixels to modify a relative perceived depth of the selected pixels in the image.
[0017] According to another aspect of the present disclosure, the method further comprises acquiring the image in a scanline manner.
[0018] According to another aspect of the present disclosure, the steps of determining the distribution for the depth values, determining the distribution for the image feature characteristics, selecting the image process, and applying the image process are implemented as each row of the image is acquired.
[0019] According to another aspect of the present disclosure, the image process is selected to modify the relative perceived depth of the plurality of regions to emphasise a subject of the image.
[0020] According to another aspect of the present disclosure, the method further comprises determining a depth map of the depth values for the image.
[0021] According to another aspect of the present disclosure, the method further comprises extracting the image feature characteristics based on a superpixel segmentation of the image.
[0022] According to another aspect of the present disclosure, the depth values relate to one of physical depth values or perceptual depth values of objects in a scene of the image.
[0023] A further aspect of the present disclosure provides a non-transitory computer readable storage medium, a computer program for modifying an image stored on the storage medium, comprising: code for determining a distribution of depth values associated with a plurality of regions in the image, the image captured using an image capture device; code for determining a distribution of image feature characteristics associated with content in the plurality of image regions; code for selecting an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics; and code for applying the selected image process to the image to modify a relative perceived depth of the plurality of regions in the image.
[0024] A further aspect of the present disclosure provides an image capture device configured to: capture an image of a scene; determine a distribution of depth values associated with a plurality of regions in the captured image; determine a distribution of image feature characteristics associated with content in the plurality of image regions; select an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics; and apply the selected image process to the captured image to modify a relative perceived depth of the plurality of regions in the captured image.
[0025] A further aspect of the present disclosure provides a system for modifying an image, the system comprising: an image capture device for capturing an image; a processor; and a memory, the memory having instructions thereon executable by the processor to modify the captured image by: determining a distribution of depth values associated with a plurality of regions in the image, the image captured using an image capture device; determining a distribution of image feature characteristics associated with content in the plurality of image regions; selecting an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics; and applying the selected image process to the image to modify a relative perceived depth of the plurality of regions in the image.
[0026] A further aspect of the present disclosure provides a method of modifying an image, said method comprising the steps of: determining a distribution of depth values and a distribution of image feature characteristics associated with a plurality of regions in the image, the image captured using an image capture device, wherein an alignment of the distribution of the depth values and the distribution of the image feature characteristics identifies a plurality of objects in the image; selecting, for each object of the identified plurality of objects, an image process from a plurality of image processes based on the depth values and image feature characteristics associated with the object; and applying the selected image processes to regions of the image to modify a relative perceived depth in the image with respect to the plurality of objects in the image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] One or more embodiments of the invention will now be described with reference to the following drawings, in which:
[0028] Fig. 1A shows an image;
[0029] Fig. 1B shows a depth map corresponding to the image of Fig. 1A;
[0030] Figs. 1C and 1D show modified images derived using the photograph of Fig. 1A and the depth map of Fig. 1B;
[0031] Fig. 2 is a schematic block diagram of a method of modifying an image;
[0032] Fig. 3 is a schematic flow diagram illustrating a method of identifying objects in an image as used in the method of Fig. 2;
[0033] Fig. 4 is a schematic flow diagram illustrating an alternative method of modifying an image;
[0034] Fig. 5 is a schematic flow diagram illustrating a method of selecting an image process as used in the method of Fig. 2; and
[0035] Figs. 6A and 6B collectively form a schematic block diagram representation of an electronic device upon which described arrangements can be practised.
DETAILED DESCRIPTION INCLUDING BEST MODE
[0036] Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
[0037] Fig. 1A shows an example photographic image 120 captured by an image capture device that captures and processes RGBD (RGB denoting the colour channels Red, Green, and Blue of the photographic image, and D denoting a measured depth of a scene) images. The RGBD image capture device has both an image sensor that captures the photographic image 120 of a scene and a depth sensor that measures the depth of the scene to produce a corresponding depth map 150 of the scene, as shown in Fig. 1B. In the photographic image 120 of Fig. 1A, a person 130 is visible in the foreground and a tree 140 is visible in the background.
[0038] The depth map 150 is illustrated in the example of Fig. 1B such that shorter distances are indicated by lighter colouring (sparser hatching) and longer distances are indicated by darker colouring (denser hatching). A first region 160 of the depth map 150, corresponding to the person 130 of the photographic image 120, is coloured lightly, indicating close proximity to the image capture device. A second region 170 of the depth map 150, corresponding to the tree 140 of the photographic image 120, is coloured more darkly, indicating a greater distance from the image capture device.
[0039] A photographer, having captured an image of the scene of the photograph 120, may consider what post-processing is appropriate to achieve specific aesthetic goals in a resulting post-processed image. For example, if the image is intended as a portrait of the person 130, an aesthetic goal or target might be to emphasise the person, as shown by a modified or processed image 180 in Fig. 1C. However, if the person 130 is facing away from the camera, an aesthetic goal may be to emphasise whatever the person 130 is looking at in the image 120, as shown by a second modified image 190 in Fig. 1D, thereby effecting a change in the relative perceived depth of objects in the scene.
[0040] In the arrangements described, a distinction is made between perceived or perceptual depth (i.e. depth as understood by a human viewer of an image) and physical depth (i.e. the actual depth of the scene that the image depicts). There are technologies available for measuring the physical depth of a scene. Depth sensors are available that measure the depth of a scene using techniques such as time-of-flight imaging, stereo-pair imaging to calculate object disparities, or imaging of projected light patterns. The (physical) depth can be represented by a spatial array of values called a depth map, where each value of the depth map is a distance between the depth sensor and the nearest surface along a ray. The measurements can be combined with a photographic image of the scene to form an RGBD image such that each pixel of the image has a paired colour value (representing the visible light) and depth value (representing the distance from the viewpoint). Other representations and colour spaces may also be used for an image.
[0041] The perceptual depth of an image can be estimated by extracting one or more monocular depth cues from the image and modelling a combined effect on human perception of depth according to the human visual system. One perceptual depth estimation algorithm first extracts blur caused by the optical system of the camera to produce a local blur map. This blur map is processed by a model of the human visual system. In one arrangement, the model of the human visual system is a spatial filtering operation, namely an image convolution by the contrast sensitivity function of the human visual system. The convolution removes information that cannot be perceived by the human visual system. According to another arrangement, the convolution removes information that cannot be perceived by the human visual system, and amplifies information that will be highly perceivable to the human visual system, to produce a perceived blur map. Depth is locally estimated from the perceived blur map to produce a perceived depth map. The perceived depth map is mapped into the Fourier domain.
[0042] The captured image may be blurred using convolution with a Gaussian kernel with a standard deviation equal to a predetermined blur radius σ0, forming a reblurred image. An example value for σ0 is 1 pixel. A gradient magnitude of the captured image is divided by a gradient magnitude of the reblurred image, forming a gradient magnitude ratio image. Edge locations in the captured image are detected, for example using Canny edge detection. For each edge location, the gradient magnitude ratio image is used to estimate the blur radius in the captured image using Equation (1) below:

σ = σ0 / sqrt(R^2 - 1)     Equation (1)

wherein σ is the estimated blur radius, σ0 is the predetermined reblur radius and R is the gradient magnitude ratio image. The result of Equation (1) is a sparse perceptual depth map, where the depth is expressed in the form of blur radius amounts.

According to the arrangements described, an image capture system that includes a depth sensor selects appropriate post-processing to apply to a captured image to modify the perceived depth of the image. In the arrangements described, the image capture system is a camera system implemented using a single electronic device, such as the system 600 discussed in relation to Figs. 6A and 6B below. Other implementations of the camera system comprise a stereo-pair of digital SLR cameras operating synchronously as a unit, and associated hardware and software. Stereo-pair disparity is used as a mechanism for measuring the depth of elements of a captured image in the corresponding scene. Disparities are measured from locations of objects in the image of the first camera to the locations of the same objects in the image of the second camera. Thus the depth map aligns to object locations in the image of the first camera. The image of the first camera is therefore used as the photographic image. In other arrangements, camera systems use a different principle for measuring depth, for example a camera system comprising a single digital SLR camera (producing a photographic image) coupled with a time-of-flight depth sensor (producing a depth map).
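The following Python sketch (using OpenCV and NumPy, not part of the patent text) illustrates the blur-radius estimation of Equation (1): the image is reblurred with a Gaussian of radius σ0, the gradient magnitude ratio R is formed, and σ = σ0/sqrt(R^2 - 1) is evaluated at Canny edge locations. The Sobel gradients, the Canny thresholds and the [0, 1] greyscale input are assumptions of this sketch.

    import cv2
    import numpy as np

    def sparse_blur_map(gray, sigma0=1.0):
        """Estimate the local blur radius at edge locations, following
        Equation (1). gray is assumed to be a float32 greyscale image in
        [0, 1]. Returns the estimated blur radius at detected edge pixels
        and NaN elsewhere (a sparse perceptual depth map)."""
        reblurred = cv2.GaussianBlur(gray, (0, 0), sigma0)

        def grad_mag(img):
            gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
            return np.sqrt(gx * gx + gy * gy)

        eps = 1e-6
        ratio = grad_mag(gray) / (grad_mag(reblurred) + eps)   # R of Equation (1)

        edges = cv2.Canny((gray * 255).astype(np.uint8), 50, 150) > 0
        valid = edges & (ratio > 1.0 + eps)     # Equation (1) requires R > 1

        blur = np.full(gray.shape, np.nan, dtype=np.float32)
        blur[valid] = sigma0 / np.sqrt(ratio[valid] ** 2 - 1.0)
        return blur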
[0043] Figs. 6A and 6B collectively form a schematic block diagram of a general purpose electronic device 601 including embedded components, upon which the methods to be described are desirably practiced.
[0044] The electronic device 601 is typically a digital camera, such as a digital SLR camera, in which processing resources are limited. The electronic device may also be any electronic device capable of capturing and/or processing or modifying an image and in which processing resources are limited. Nevertheless, the methods to be described may also be performed on higher-level devices such as desktop computers (not shown), server computers (not shown), and other such devices with significantly larger processing resources.
[0045] As seen in Fig. 6A, the electronic device 601, also referred to as an image capture device, comprises an embedded controller 602. Accordingly, the electronic device 601 may be referred to as an “embedded device.” In the arrangements described, the electronic device 601 relates to a digital SLR camera.
[0046] In the present example, the controller 602 has a processing unit (or processor) 605 which is bi-directionally coupled to an internal storage module 609. The storage module 609 may be formed from non-volatile semiconductor read only memory (ROM) 660 and semiconductor random access memory (RAM) 670, as seen in Fig. 6B. The RAM 670 may be volatile, non-volatile or a combination of volatile and non-volatile memory.
[0047] The electronic device 601 includes a display controller 607, which is connected to a video display 614, such as a liquid crystal display (LCD) panel or the like. The display controller 607 is configured for displaying graphical images on the video display 614 in accordance with instructions received from the embedded controller 602, to which the display controller 607 is connected.
[0048] The electronic device 601 also includes user input devices 613 which are typically formed by keys, a keypad or like controls. In some implementations, the user input devices 613 may include a touch sensitive panel physically associated with the display 614 to collectively form a touch-screen. Such a touch-screen may thus operate as one form of graphical user interface (GUI) as opposed to a prompt or menu driven GUI typically used with keypad-display combinations. Other forms of user input devices may also be used, such as a microphone (not illustrated) for voice commands or a joystick/thumb wheel (not illustrated) for ease of navigation about menus.
[0049] As seen in Fig. 6A, the electronic device 601 also comprises a portable memory interface 606, which is coupled to the processor 605 via a connection 619. The portable memory interface 606 allows a complementary portable memory device 625 to be coupled to the electronic device 601 to act as a source or destination of data or to supplement the internal storage module 609. Examples of such interfaces permit coupling with portable memory devices such as Universal Serial Bus (USB) memory devices, Secure Digital (SD) cards, Personal Computer Memory Card International Association (PCMCIA) cards, optical disks and magnetic disks.
[0050] The electronic device 601 also has a communications interface 608 to permit coupling of the device 601 to a computer or communications network 620 via a connection 621. In arrangements using more than one image capture device, the device 601 may be in communication with another image capture device (not shown) via the network 620. The connection 621 may be wired or wireless. For example, the connection 621 may be radio frequency or optical. An example of a wired connection includes Ethernet. Further, examples of wireless connection include Bluetooth™ type local interconnection, Wi-Fi (including protocols based on the standards of the IEEE 802.11 family), Infrared Data Association (IrDa) and the like.
[0051] In some implementations, the methods described of modifying an image are implemented partially on the electronic device 601 and partially on a remote device such as a server computer (not shown) in communication with the electronic device 601 via the network 620.
[0052] Typically, the electronic device 601 is configured to perform some special function.
The embedded controller 602, possibly in conjunction with further special function components 610, is provided to perform that special function. For example, where the device 601 is a digital camera, the components 610 may represent a lens, focus control, a depth sensor and an image sensor of the camera. In some arrangements, the special function components 610 include a depth sensor comprising two image sensors side by side, to measure stereo disparity of a captured image. The special function components 610 are connected to the embedded controller 602.
[0053] As another example, the device 601 may be a mobile telephone handset. In this instance, the components 610 may represent those components required for communications in a cellular telephone environment. Where the device 601 is a portable device, the special function components 610 may represent a number of encoders and decoders of a type including Joint Photographic Experts Group (JPEG), (Moving Picture Experts Group) MPEG, MPEG-1 Audio Layer 3 (MP3), and the like.
[0054] The methods described hereinafter may be implemented using the embedded controller 602, where the processes of Figs. 2 to 4 may be implemented as one or more software application programs 633 executable within the embedded controller 602. The electronic device 601 of Fig. 6A implements the described methods. In particular, with reference to
Fig. 6B, the steps of the described methods are effected by instructions in the software 633 that are carried out within the controller 602. The software instructions may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0055] The software 633 of the embedded controller 602 is typically stored in the non-volatile ROM 660 of the internal storage module 609. The software 633 stored in the ROM 660 can be updated when required from a computer readable medium. The software 633 can be loaded into and executed by the processor 605. In some instances, the processor 605 may execute software instructions that are located in RAM 670. Software instructions may be loaded into the RAM 670 by the processor 605 initiating a copy of one or more code modules from ROM 660 into RAM 670. Alternatively, the software instructions of one or more code modules may be pre-installed in a non-volatile region of RAM 670 by a manufacturer. After one or more code modules have been located in RAM 670, the processor 605 may execute software instructions of the one or more code modules.
[0056] The application program 633 is typically pre-installed and stored in the ROM 660 by a manufacturer, prior to distribution of the electronic device 601. However, in some instances, the application programs 633 may be supplied to the user encoded on one or more CD-ROM (not shown) and read via the portable memory interface 606 of Fig. 6A prior to storage in the internal storage module 609 or in the portable memory 625. In another alternative, the software application program 633 may be read by the processor 605 from the network 620, or loaded into the controller 602 or the portable storage medium 625 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that participates in providing instructions and/or data to the controller 602 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, flash memory, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the device 601. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the device 601 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like. A computer readable medium having such software or computer program recorded on it is a computer program product.
[0057] The second part of the application programs 633 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 614 of Fig. 6A. Through manipulation of the user input device 613 (e.g., the keypad), a user of the device 601 and the application programs 633 may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via loudspeakers (not illustrated) and user voice commands input via the microphone (not illustrated).
[0058] Fig. 6B illustrates in detail the embedded controller 602 having the processor 605 for executing the application programs 633 and the internal storage 609. The internal storage 609 comprises read only memory (ROM) 660 and random access memory (RAM) 670. The processor 605 is able to execute the application programs 633 stored in one or both of the connected memories 660 and 670. When the electronic device 601 is initially powered up, a system program resident in the ROM 660 is executed. The application program 633 permanently stored in the ROM 660 is sometimes referred to as “firmware”. Execution of the firmware by the processor 605 may fulfil various functions, including processor management, memory management, device management, storage management and user interface.
[0059] The processor 605 typically includes a number of functional modules including a control unit (CU) 651, an arithmetic logic unit (ALU) 652, a digital signal processor (DSP) 653 and a local or internal memory comprising a set of registers 654 which typically contain atomic data elements 656, 657, along with internal buffer or cache memory 655. One or more internal buses 659 interconnect these functional modules. The processor 605 typically also has one or more interfaces 658 for communicating with external devices via system bus 681, using a connection 661.
[0060] The application program 633 includes a sequence of instructions 662 through 663 that may include conditional branch and loop instructions. The program 633 may also include data, which is used in execution of the program 633. This data may be stored as part of the instruction or in a separate location 664 within the ROM 660 or RAM 670.
[0061] In general, the processor 605 is given a set of instructions, which are executed therein. This set of instructions may be organised into blocks, which perform specific tasks or handle specific events that occur in the electronic device 601. Typically, the application program 633 waits for events and subsequently executes the block of code associated with that event. Events may be triggered in response to input from a user, via the user input devices 613 of Fig. 6A, as detected by the processor 605. Events may also be triggered in response to other sensors and interfaces in the electronic device 601.
[0062] The execution of a set of the instructions may require numeric variables to be read and modified. Such numeric variables are stored in the RAM 670. The disclosed method uses input variables 671 that are stored in known locations 672, 673 in the memory 670. The input variables 671 are processed to produce output variables 677 that are stored in known locations 678, 679 in the memory 670. Intermediate variables 674 may be stored in additional memory locations in locations 675, 676 of the memory 670. Alternatively, some intermediate variables may only exist in the registers 654 of the processor 605.
[0063] The execution of a sequence of instructions is achieved in the processor 605 by repeated application of a fetch-execute cycle. The control unit 651 of the processor 605 maintains a register called the program counter, which contains the address in ROM 660 or RAM 670 of the next instruction to be executed. At the start of the fetch execute cycle, the contents of the memory address indexed by the program counter is loaded into the control unit 651. The instruction thus loaded controls the subsequent operation of the processor 605, causing for example, data to be loaded from ROM memory 660 into processor registers 654, the contents of a register to be arithmetically combined with the contents of another register, the contents of a register to be written to the location stored in another register and so on. At the end of the fetch execute cycle the program counter is updated to point to the next instruction in the system program code. Depending on the instruction just executed this may involve incrementing the address contained in the program counter or loading the program counter with a new address in order to achieve a branch operation.
[0064] Each step or sub-process in the processes of the methods described below is associated with one or more segments of the application program 633, and is performed by repeated execution of a fetch-execute cycle in the processor 605 or similar programmatic operation of other independent processor blocks in the electronic device 601.
[0065] Fig. 2 shows a method 200 of modifying an image. The method 200 is typically a computer-implemented method, implemented as one or more modules of the software application 633, stored in the memory 609 and controlled by execution of the processor 605.
The method 200 is implemented after an image has been captured by the electronic device 601.
[0066] Referring to Fig. 2, the method 200 starts at an object identification step 210. The application 633 executes at the step 210 to identify objects in the scene of the captured image, such as the person 130 and the tree 140 of the image 120. The identified objects form content of the captured image. A method 300 of identifying objects in a captured image, as implemented at step 210, is described in further detail below in relation to Fig. 3.
[0067] After execution of the object identification step 210, the application 633 progresses under execution of the processor 605 to an image process selection step 220. Execution of the image process selection step 220 selects one or more image processes from a plurality of image processes to apply to objects identified in the step 210 or regions associated with the objects.
[0068] The plurality of image processes includes any image processing technique which can be used to modify an image. The image processes may be used alone or in combination with other image processes applied to other regions in order to modify depth perception of an object or region of a captured image. Relevant image processes typically affect visual properties or characteristics of the image associated with the object, for example affecting colour, shading, focus, and the like. For example, execution of the step 220 may select processes that would emphasise the person 130 and processes that would deemphasise the tree 140 of the image 120. A method 500 of selecting image processes, as executed at step 220, is described hereafter in relation to Fig. 5.
[0069] The method 200 progresses under execution of the processor 605 from the image process selection step 220 to an application step 230. Execution of the application step 230 applies the selected image processes to the captured image, thereby to modify relative perceived depths of the identified objects or regions in the captured image. The selected process may be applied to a region, object or selection of pixels in the image to alter a perceived depth of the region, object or selection of pixels. Upon completion of the application step 230, the method 200 ends at a step 299.
[0070] In the arrangements described, the object identification step 210 is implemented according to the method 300 of Fig. 3. The method 300 is typically implemented as one or more modules of the application 633, stored in the memory 609 and controlled by execution of the processor 605.
[0071] The method 300 starts at a depth map obtaining step 310. Execution of the step 310 obtains or determines a depth map corresponding to the captured image (such as the depth map 150 of Fig. 1B). The depth map is determined by measuring the disparity between a stereo pair of baseline-separated images corresponding to the captured image, such as is performed by commercially available stereo matching software. In other implementations, other methods of obtaining depth maps may be used at the step 310. For example, methods such as using an active time-of-flight depth sensor, analysing blur differences between a pair of images captured using different focus or aperture settings, or analysing blur of a single captured image may be used. The depth map generated at step 310 typically relates to physical depth values of objects in the scene of the captured image. However, in some arrangements perceptual depth values are determined for the captured image and stored in the depth map in execution of the step 310. The depth map and the corresponding distribution of the depth values are determined in relation to each pixel of the image. The depth map is stored on the electronic device 601 in some arrangements, for example in the memory 609. In some arrangements, the depth map may relate to perceived depth values of the image.
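As one possible realisation of the stereo-disparity depth map of step 310 (an assumption, not the patent's specific implementation), the following sketch uses OpenCV block matching on a rectified stereo pair and converts disparity to metric depth; the focal length, baseline and matcher parameters are illustrative.

    import cv2
    import numpy as np

    def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
        """Depth map from a rectified 8-bit greyscale stereo pair via block
        matching. Unmatched pixels are left as NaN."""
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan
        # depth = f * B / d for focal length f (pixels) and baseline B (metres).
        return focal_px * baseline_m / disparity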
[0072] The method 300 progresses under execution of the processor 605 from the depth map obtaining step 310 to an image feature characteristics extraction step 320. Execution of the step 320 summarises or extracts visual information of the captured image in a form that is amenable to matching visually similar regions. The summarising of the visual information is in some arrangements achieved based on a superpixel segmentation of the captured image to identify compact regions of similar visual appearance. The superpixel segmentation is achieved by assigning an approximate grid of initial candidate points sparsely covering the captured image. Each pixel of the captured image becomes associated with a most similar candidate point according to a dissimilarity measure between that pixel and nearby candidate points. The dissimilarity measure is selected such that visually similar and proximate image regions become associated with the same candidate point.
[0073] To achieve association of visually similar and proximate image regions, the dissimilarity measure has a measurement of local image characteristics about the pixel and the candidate point being considered, and a measure of distance between the pixel and the candidate point being considered. For example, the colour distance in the CIELAB colour space between the pixel and the candidate point is a useful local image characteristic to measure. After each pixel has been associated with a candidate point, the set of pixels associated with each candidate point is called a “superpixel”. To determine the image feature characteristics of the image, the extraction step 320 executes to determine local image characteristics within each superpixel. The local image characteristics measure attributes of each superpixel that describe the visual appearance of that superpixel, for example colour and frequency distribution of the superpixel. The local image characteristic used in superpixel segmentation may in some arrangements relate to the depth map corresponding to the captured image.
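A minimal sketch of the superpixel-based feature extraction of step 320, assuming SLIC segmentation from scikit-image and using only the mean CIELAB colour of each superpixel as its local image characteristic (the description above also mentions frequency distribution, omitted here for brevity):

    import numpy as np
    from skimage.color import rgb2lab
    from skimage.segmentation import slic

    def superpixel_features(image_rgb, n_segments=400):
        """Segment the image into superpixels and summarise each one by its
        mean CIELAB colour, a simple stand-in for the local image
        characteristics described for step 320."""
        labels = slic(image_rgb, n_segments=n_segments, compactness=10)
        lab = rgb2lab(image_rgb)
        features = {}
        for sp in np.unique(labels):
            mask = labels == sp
            features[sp] = lab[mask].mean(axis=0)   # mean L*, a*, b* values
        return labels, features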
[0074] The extracted image features are stored in the memory 609 in some implementations.
[0075] The method 300 progresses from the extraction step 320 to a distribution determination step 330. Execution of the distribution determination step 330 determines a spatial distribution of depth values for a plurality of regions in the image and a spatial distribution of image feature characteristics associated with content of the captured image in the plurality of regions. The spatial distribution of the depth values is determined using the depth map obtained in the depth map obtaining step 310. The spatial distribution of the image feature characteristics is determined using the local image characteristics from the image feature characteristics extraction step 320. Determining the spatial distribution of the depth values and the image feature characteristics may in some implementations involve rescaling or otherwise registering or calibrating one or both of the depth map and image feature characteristics. Accordingly, the depth values of the depth map correspond to image feature characteristics of the same scene regions.
[0076] The determined spatial distribution of the depth values and the image feature characteristics may be stored on the device 601, for example on the memory 609.
[0077] The method 300 continues under execution of the processor 605 from the distribution determination step 330 to a common discontinuity identification step 340. Execution of the step 340 identifies likely object boundaries in both the spatial distribution of depth values and the spatial distribution of image feature characteristics. Rapid changes in depth or image feature characteristics are identified as being potential object boundaries. For example, Canny edge detection can be applied to a spatial distribution to identify edge locations, which are indicative of rapid changes in that distribution. Each such edge implies a potential object boundary.
When such potential object boundaries occur at the same location in both the distribution of depth values and the distribution of image feature characteristics, the location is identified as a common discontinuity. For example, a common discontinuity can be identified by a sufficiently large response from a locally windowed cross-correlation between the edges of the spatial distribution of depth values and the edges of the spatial distribution of image feature characteristics. The central location of the local window indicates the location of the common discontinuity. Objects are segmented using the common discontinuities to denote object boundaries, such that objects are identified in the captured image. The step 340 accordingly aligns the distribution of the depth values and the distribution of the image feature characteristics to identify at least two objects of the captured image based on the alignment. In identifying objects and boundaries using the alignment, the method 300 operates at step 340 to identify a plurality of objects in the captured image based upon the distribution of the depth values and the distribution of the image feature characteristics.
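The following sketch approximates the common-discontinuity test of step 340: Canny edges are extracted from the spatial distribution of depth values and from a single-channel image feature distribution, and a locally windowed correlation of the two edge maps marks locations where both distributions change rapidly. The window size and threshold are illustrative assumptions.

    import cv2
    import numpy as np

    def common_discontinuities(depth_map, feature_map, window=15, threshold=0.2):
        """Mark locations where both the depth distribution and the image
        feature distribution change rapidly (candidate object boundaries)."""
        def edge_map(x):
            x8 = cv2.normalize(x, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            return (cv2.Canny(x8, 50, 150) > 0).astype(np.float32)

        e_depth = edge_map(depth_map)
        e_feat = edge_map(feature_map)
        # Local mean of the product of the two edge maps: high where edges from
        # both distributions fall within the same window.
        score = cv2.boxFilter(e_depth * e_feat, -1, (window, window))
        return score > threshold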
[0078] The object identification method 300 continues from step 340 to a step 399 and ends.
[0079] In the arrangements described, the image process selection step 220 is implemented using the method 500 of Fig. 5. The method 500 is typically implemented as one or more modules of the application 633, stored in the memory 609 and controlled by execution of the processor 605.
[0080] The method 500 begins at an aesthetic target determining step 510. Execution of the aesthetic target determining step 510 determines a target aesthetic outcome appropriate for the captured image. For example, a target aesthetic may be to emphasise a particular object and deemphasise other objects of the captured region. Another determined aesthetic target may be to exaggerate, caricaturise or otherwise modify a perceived depth of a particular object or region of the captured image. To determine an appropriate aesthetic target, the application 633 executes to analyse the spatial distribution of the depth values and the spatial distribution of the image feature characteristics determined by the distribution step 330 (Fig. 3) for the objects identified by the object identification step 210 (Fig. 2). In order to determine an appropriate aesthetic target, the application 633 executes to gather parameters about the scene of the captured image that relate to the layout of objects in the scene and the relationships of the objects to each other. The scene parameters typically comprise data that could be useful for understanding the photographic style and composition of the photograph, but the application 633 does not classify the captured image into a photographic style explicitly. The application 633 instead uses such data to learn which target aesthetics are appropriate for a given scene layout.
[0081] Gathering the scene parameters typically comprises:
- A labelling of objects with semantically meaningful labels such as “tree”, “ground”, “sky”, “person” and so forth. The labelling is achieved by using image understanding techniques such as graphical models on the spatial distributions produced by the distribution determination step 330. Graphical models such as Markov random fields may be used in this regard. Such graphical models typically operate by learning a combination of a) the local spatial distributions commonly associated with specific object labels (e.g. this spatial distribution typically is labelled as a “person”), and b) common neighbourhood relationships between object labels (e.g. the spatial distribution of a “wall” typically adjoins the spatial distribution of a “ceiling” in certain directions). The learning is performed in a supervised manner. Specifically, a database is stored of images for which objects have been identified according to execution of the object identification step 210. The associated spatial distributions are determined by execution of the distribution determination step 330. The objects in the image database are manually labelled by human observers (effectively “supervisors”) with semantic labels, such that an association between the spatial distributions and the semantic labels is learned by the application 633.
- Determining visual salience of regions of the image (that is, a relative measure of how eye-catching various regions of the image are from the point of view of a human observer). The visual salience is estimated using published, established techniques, and provides insight into the artistic intent of the photographer.
- Determining auxiliary information about a state of operation of the electronic device 601, such as whether the electronic device 601 is currently (or at the time of image capture) using a tracking autofocus mode (used to capture images of moving objects).
[0082] After the scene parameters have been gathered, an aesthetic target is determined by execution of the application 633 using instance-based machine learning techniques, such as k-nearest neighbours, trained using example aesthetic targets provided by experienced photographers. In more detail, the application 633 is trained using known machine learning or training techniques. For example, training data may be collected by presenting an experienced photographer with a large collection of images and asking the photographer to select a good aesthetic target for each image, and the responses of the photographer are recorded and stored in the memory 609. The application 633 associates the responses of the photographer with parameters describing the layout depicted in the respective images, such as the visual salience described heretofore. The instance-based learning techniques allow the application 633 to infer an appropriate aesthetic target for a new image based on known images with similar layout parameters and the gathered parameters.
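A toy sketch of the instance-based aesthetic target inference of step 510, assuming scikit-learn's k-nearest neighbours classifier; the layout parameters and training examples shown are entirely hypothetical stand-ins for the photographer-provided training data described above.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical layout parameters: [subject depth, subject salience,
    # background clutter, tracking-autofocus flag], each paired with the
    # aesthetic target an experienced photographer chose for that image.
    X_train = np.array([[0.2, 0.9, 0.1, 1],
                        [0.8, 0.3, 0.7, 0],
                        [0.3, 0.8, 0.2, 1]])
    y_train = ['emphasise_subject', 'emphasise_background', 'emphasise_subject']

    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, y_train)

    def aesthetic_target(layout_params):
        """Infer a target for a new image from its layout parameters."""
        return knn.predict(np.asarray(layout_params).reshape(1, -1))[0]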
[0083] In the arrangements described above, the aesthetic target is determined based upon the distribution of depth values and the distribution of image feature characteristics. In some arrangements, the application 633 identifies a plurality of appropriate aesthetic targets and presents the identified aesthetic targets to the user using the display 614 of the electronic device 601. For example, a predetermined number of aesthetic targets may be determined or identified using the methods described above and each identified target displayed to the user in sequence. The user can operate the GUI of the device 601 by manipulating the inputs 613 to identify a desired aesthetic target. The identified aesthetic target is used as the determined aesthetic target.
[0084] After execution of the aesthetic target determining step 510, the method 500 progresses under execution of the processor 605 to an object processing strategy determination step 520. Execution of the object processing strategy determination step 520 identifies a processing goal for each object as a consequence of the aesthetic target selected at the step 510, based upon perceived relative depth. In this regard, the step 520 typically references the determined depth distribution. For example, for the photographic image 120 of Fig. 1A, the determined aesthetic target may emphasise the photographic subject, the photographic subject being the person 130. In this example, the processing strategy for the person 130 is to increase the perceived depth between the person 130 and non-subject objects such as the tree 140 by inducing the perception that the person 130 is closer to the image capture device 601. The processing strategy for non-subject objects such as the tree 140 is to increase the perceived depth between that non-subject object (the tree 140) and the person 130 by inducing the perception that the tree 140 is further from the image capture device 601 than the person 130. As a result of the example processing strategies described above, the aesthetic target of emphasising the person 130 can be obtained.
[0085] The method 500 progresses under execution of the processor 605 from the object processing strategy selection step 520 to an object processing selection step 530. In execution of the step 530, the application 633 determines or selects specifically which image process of the plurality of image processes to apply to and about each object or region, in order to achieve the determined processing strategy for that object. The step 530 operates to select an image process for each identified object or region of the image, based on the depth values and image feature characteristics associated with the object or region. The step 530 may be considered to select the image processes based on the aesthetic target.
[0086] In some arrangements, the determined image processing is responsive to the depth on and about the object and the processing strategy for the object. For example, increasing colour saturation of an object can make the object appear perceptually closer. Referring to the example of Fig. 1A, the target processing for the person 130 can involve increasing colour saturation of the person 130 according to a physical depth of the person 130 in the image 120. In arrangements where the step 310 determines a perceptual depth map, the selected image process can be determined based upon the perceived depth of the person 130 in the image 120.
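As a concrete example of a depth-responsive image process such as might be selected at step 530 (an illustration, not the patent's prescribed processing), the sketch below boosts colour saturation more strongly for nearer pixels, assuming an 8-bit BGR image and a depth map normalised to [0, 1] with 0 denoting the nearest point:

    import cv2
    import numpy as np

    def saturation_by_depth(image_bgr, depth, max_boost=0.4):
        """Increase colour saturation more strongly for nearer pixels so that
        the corresponding content is perceived as closer. image_bgr is an
        8-bit BGR image; depth is scaled to [0, 1] with 0 = nearest."""
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        boost = 1.0 + max_boost * (1.0 - depth)      # larger boost for small depth
        hsv[..., 1] = np.clip(hsv[..., 1] * boost, 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)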
[0087] The aesthetic target and the processing strategy relate to a plurality of regions of the image. Each of the plurality of regions may correlate directly to a plurality of objects determined at step 340 of the method 300. Alternatively the regions may relate to a plurality of portions of objects identified at step 340, or regions at least partially including, but not limited to, the identified objects. As the image process is based upon the selected aesthetic effect and the object processing strategy, an image processing step is selected for each object or relevant region of the image based upon the determined distribution of the depth values and the determined distribution of image feature characteristics for the plurality of regions. Each region relates to a number of pixels of the image. In selecting the aesthetic effect, the application 633 executes to select a plurality of pixels relating to a region or object to which the image process is to be applied.
[0088] After processes for each object have been determined by execution of step 530, the method 500 ends, effectively ending the image process selection step 220 of Fig. 2. Upon completion of the step 220, the determined processes are applied in the image process application step 230 of the method 200 that produces a modified image according to the aesthetic target.
[0089] An alternative arrangement for modifying an image relates to selecting appropriate postprocessing to apply to a captured image that is acquired in a scanline manner (that is, acquired one row at a time, with a top row first and a following row thereafter, and so forth) in an ongoing manner as the image data is acquired. A method 400 of modifying an image for such an arrangement is shown in Fig. 4. The method 400 may be implemented as one or more modules of the application 633, stored in the memory 609 and controlled by execution of the processor 605. In some arrangements, the method 400 may be implemented on a server computer in communication with the electronic device 601. In such implementations, the image capture device 601 captures the image and transmits the image to the server computer, for example via the network 620.
[0090] The method 400 starts at an image acquisition loop start step 410. The image acquisition loop executes to obtain a new row of the image as the new row becomes available, and ends at a check step 415, as described below, when the new row has been obtained and processed. The method 400 repeatedly iterates through the image acquisition loop starting at the step 410 until the entire image has been obtained and processed.
[0091] Once the acquisition loop start step 410 acquires a new image row, the method 400 continues to an image band obtaining step 420. Execution of the image band obtaining step 420 inserts the new row into a row buffer containing a fixed number of contiguous image rows. The fixed number is less than the number of rows required to store the entire image. The row buffer may be stored on the device 601, for example in the memory 609. The row buffer provides some amount of image context to subsequent steps of the method 400, the subsequent steps operating on the row buffer rather than waiting for the entire image to be fully acquired. The row buffer is typically implemented using a data structure such as a circular buffer. A circular buffer provides a facility for replacing only the oldest row with a newly acquired row once the buffer has been filled.
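A simple sketch of the row buffer of step 420, implemented here with a Python deque of fixed length so that, once full, each newly acquired row displaces the oldest one (the buffer size and row shape checks are illustrative):

    from collections import deque
    import numpy as np

    class RowBuffer:
        """Fixed-size band of contiguous image rows. Once the buffer is full,
        each newly pushed row displaces the oldest one, as a circular buffer
        would."""
        def __init__(self, n_rows, row_width, channels=3):
            self.rows = deque(maxlen=n_rows)
            self.row_shape = (row_width, channels)

        def push(self, row):
            assert row.shape == self.row_shape
            self.rows.append(row)

        def full(self):
            return len(self.rows) == self.rows.maxlen

        def band(self):
            # Current image band as an (n_rows, width, channels) array.
            return np.stack(list(self.rows))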
[0092] The method 400 continues under execution of the processor 605 from the image band obtaining step 420 to a check step 425. The check step 425 executes to determine if the image band is full to ensure that the row buffer is fully loaded with image rows before further processing begins. If the row buffer is not yet filled (“No” at the check step 425), the method 400 proceeds under execution of the processor 605 to the check step 415 to await more image data becoming available. If the row buffer is filled (“Yes” at the check step 425), the method 400 proceeds under execution of the processor 605 to an object identification step 430.
[0093] The object identification step 430 executes to identify objects in the contents of the row buffer. The object identification executed at the step 430 is performed similarly to the method 300 described above with reference to Fig. 3. However there are some differences between the object identification executed at step 430 and the method 300, as discussed below. The application 633 executes to obtain the depth map of the captured image, as contained within the row buffer, at step 430 when each row of the captured image is received. Image feature characteristics are extracted from the row buffer at step 320 as each row is received. The step 330 executes to determine the spatial distribution of the depth values and the spatial distribution of the image feature characteristics for the image as contained within the row buffer including the latest received row. Step 340 identifies common discontinuities of the distributions for the image as contained within the row buffer including the latest received row. In contrast to execution of the method 300 at step 210, locations, boundary positions and identifying characteristics of objects identified in the step 430 are stored for reuse during later iterations of the image acquisition loop, for example in the memory 609. Storing the locations, boundary positions and identifying characteristics of identified objects allows later iterations to have an effective context spanning all rows that have previously been stored in the row buffer, even if some of the previously stored rows have since been overwritten by new rows. Further, the common discontinuity identification step 340 implemented at step 430 is typically more tolerant to discontinuities such as missing top and bottom rows compared to the implementation at step 210. The tolerance to discontinuities is typically increased at step 430 as occurrence of discontinuities is more likely when there is a reduced amount of image context available (for example, during earlier iterations of the image acquisition loop).
[0094] The method 400 continues under execution of the processor 605 from the object identification step 430 to an image process selection step 440. Execution of the image process selection step 440 selects image processes to apply to each object identified in the step 430. The step 440 executes similarly to the method 500. However there are some differences between implementation of the step 440 and the method 500.
[0095] In particular, setting an appropriate aesthetic target may be difficult with only a limited amount of image context. The difficulty is generally addressed in two ways. Firstly, the aesthetic target setting step 510 is trained using partial context images as well as full images, and the amount of context is used as a layout parameter. This allows the electronic device 601 to infer appropriate aesthetic targets in light of reduced context. Secondly, a coarse layout is analysed prior to image capture using a low resolution capture obtained from incidental use of the electronic device 601. For example, when the user frames a shot using live view functionality or with an electronic viewfinder enabled, the application 633 executes to capture a low resolution image used to pre-emptively analyse the layout of the scene of the image to be captured.
In situations where the application 633 executes to identify that the image layout is the same as the low resolution image layout, the pre-emptive analysis is used for aesthetic target setting.
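One possible way of deciding whether the pre-emptive low resolution analysis can be reused is sketched below; the coarse-grid layout descriptor, the tolerance value and the function name layouts_match are illustrative assumptions only:

```python
import numpy as np

def layouts_match(preview_layout, partial_layout, tolerance=0.15):
    """Decide whether the pre-capture layout analysis can be reused.

    Both arguments are assumed to be coarse grids (e.g. 8x8) of mean depth or
    luminance values describing the scene layout. The partial layout may cover
    only the rows acquired so far, so only the overlapping grid rows are
    compared.
    """
    rows = min(preview_layout.shape[0], partial_layout.shape[0])
    diff = np.abs(preview_layout[:rows] - partial_layout[:rows])
    # Normalise by the preview's dynamic range so the tolerance is relative.
    scale = np.ptp(preview_layout) or 1.0
    return float(diff.mean()) / scale < tolerance
```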
[0096] The method 400 proceeds under execution of the processor 605 from the image process selection step 440 to an image process application step 450. Execution of the step 450 produces a modified image band that can be immediately presented to the user (for example via the display 614) or transmitted to another device such as to a memory card or to a computing device (for example via the connection 621 and the network 620).
[0097] For each consecutive pair of iterations of the image acquisition loop where processing occurs at the step 450, there is a large overlap in the content of the row buffer. Therefore, the outputs of the image process application step 450 for consecutive iterations of the image acquisition loop should in most cases be highly similar. The application 633 executes to detect any relatively large differences between the outputs of the image process application step 450 for consecutive iterations, as such differences indicate a possible failure due to insufficient image context.
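A simple sketch of such a consistency check is given below, assuming the modified bands are NumPy arrays and the number of overlapping rows between consecutive bands is known; the threshold value and function name are illustrative assumptions rather than part of the described arrangements:

```python
import numpy as np

def flag_inconsistency(prev_band, curr_band, overlap, threshold=12.0):
    """Compare the overlapping rows of two consecutive modified bands.

    Returns True when the mean absolute difference over the overlap exceeds
    `threshold`, taken here as a hypothetical per-channel intensity difference
    on a 0-255 scale indicating a possible context failure.
    """
    prev_overlap = prev_band[-overlap:].astype(np.float32)  # last rows of previous band
    curr_overlap = curr_band[:overlap].astype(np.float32)   # first rows of current band
    return float(np.abs(prev_overlap - curr_overlap).mean()) > threshold
```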
[0098] Following the image process application step 450, the method 400 reaches the check step 415 of the image acquisition loop. The check step 415 determines whether there is more image data yet to be processed. If more image data is to be processed ("Y" at the step 415), the method 400 returns to the step 410. Effectively, the steps of the method 200 (adjusted as described above), including determining the distribution of the depth values and determining the distribution of the image feature characteristics (step 210), selecting the image process (step 220), and applying the image process (step 230), are implemented as each row of the scanline image is acquired.

[0099] If no more image row data is to be obtained ("N" at the step 415), the method 400 proceeds under execution of the processor 605 to an inconsistency resolving step 460. The inconsistency resolving step 460 executes to investigate any relatively large differences (for example, differences over a predetermined threshold) determined during execution of the image process application step 450. The inconsistency resolving step 460 uses the entire image context that is available to determine whether the differences are a result of insufficient context during processing.

[00100] Examples of failure cases due to insufficient context include an object that has a change in object processing strategy in different iterations of the image acquisition loop, a failure in object identification at execution of the step 430, and the like. If a failure is identified at the step 460, the application 633 executes to recalculate the output for image regions in relation to the failure location. For example, the application 633 may execute to repeat (not shown) a number of the steps of the method 400 (for example steps 430 to 450) to re-select image processes for any regions indicating a failure location. For instance, if the top 20% of image rows of an object had a first object processing strategy and the bottom 80% of image rows of that object had a second object processing strategy, the inconsistency resolving step 460 may execute to conclude that an error has occurred as a result of insufficient context during processing. Such an error can occur because the processing is performed top-down, so the top part of an object has less context available than the bottom part of the object. Further, the bottom part of the object has a stable object processing strategy, which indicates that the object processing strategy stabilised once sufficient context became available. As a result, the application 633 may execute to repeat the image process selection step 440 for the image rows containing the top 20% of that object, forcing the second object processing strategy instead of the first object processing strategy. The application 633 may then execute to repeat the image process application step 450 for the image rows containing the top 20% of that object, producing rectified modified image rows. The application 633 may then take additional steps to rectify the context error as appropriate. For example, if the modified image is being presented to the user on the display 614, the application 633 renders the rectified modified image rows instead of the modified image rows that had a context error.
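The strategy-split example above can be expressed as the following illustrative sketch; the per-row strategy labels, the stable_fraction parameter and the function name resolve_strategy_split are assumptions made for the example and do not form part of the described arrangements:

```python
def resolve_strategy_split(row_strategies, stable_fraction=0.5):
    """Detect a change of object processing strategy within one object.

    `row_strategies` is assumed to list the per-row strategy labels recorded
    for an object, in top-to-bottom order. If the lower portion of the object
    uses a single stable strategy covering at least `stable_fraction` of its
    rows while earlier rows used a different strategy, return (rows_to_redo,
    forced_strategy); otherwise return (None, None).
    """
    if not row_strategies:
        return None, None
    final_strategy = row_strategies[-1]
    # Length of the stable run of the final strategy ending at the bottom.
    stable_run = 0
    for strategy in reversed(row_strategies):
        if strategy != final_strategy:
            break
        stable_run += 1
    if stable_run == len(row_strategies):
        return None, None            # no split: a single strategy was used
    if stable_run / len(row_strategies) < stable_fraction:
        return None, None            # the bottom strategy never stabilised
    rows_to_redo = range(len(row_strategies) - stable_run)  # the earlier rows
    return rows_to_redo, final_strategy
```

In the 20%/80% example, rows_to_redo would cover the image rows containing the top 20% of the object and forced_strategy would be the second object processing strategy, after which the image process selection step 440 and the image process application step 450 would be repeated for those rows.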
[00101] Upon execution of the inconsistency resolving step 460, the method 400 proceeds to a step 499 and ends. Upon reaching the end of the method 400, the application 633 has generated or produced a modified image according to the aesthetic target.
[00102] The arrangements described are applicable to the computer and data processing industries and particularly for the image processing and photography industries.
[00103] In selecting the image processes based upon the determined distribution of depth values and the determined distribution of image feature characteristics, the arrangements described provide a means of modifying the perceived depth of objects or regions in the image based upon an aesthetic effect which a photographer is likely to want to achieve. The arrangements described effectively assess a scene of a captured image and determine scene suitability and appropriate image processes to modify the perceived depth of objects or regions in the image.
[00104] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
[00105] In the context of this specification, the word “comprising” means “including principally but not necessarily solely” or “having” or “including”, and not “consisting only of”. Variations of the word “comprising”, such as “comprise” and “comprises”, have correspondingly varied meanings.

Claims (20)

1. A computer-implemented method of modifying an image, said method comprising the steps of: determining a distribution of depth values associated with a plurality of regions in the image, the image captured using an image capture device; determining a distribution of image feature characteristics associated with content in the plurality of image regions; selecting an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics; and applying the selected image process to the image to modify a relative perceived depth of the plurality of regions in the image.
2. The method of claim 1, further comprising identifying a plurality of objects in the captured image based upon the distribution of the depth values and the distribution of the image feature characteristics.
3. The method of claim 1, further comprising identifying a plurality of objects in the image based upon alignment of the distribution of the depth values and the distribution of the image feature characteristics.
4. The method according to claim 3, further comprising selecting an image process for each object of the plurality of objects based on the depth values and image feature characteristics associated with the object.
5. The method according to claim 1, wherein selecting the image process comprises determining an aesthetic target for the image to modify a perceived depth of at least one of the plurality of regions.
6. The method according to claim 1, further comprising determining an aesthetic target of the image based upon the distribution of depth values and the distribution of image feature characteristics.
7. The method according to claim 6, further comprising selecting the image process for each of the plurality of regions based on the aesthetic target.
8. The method according to claim 1, wherein the distribution of the depth values is determined in relation to each pixel of each of the plurality of regions.
9. The method according to claim 1, further comprising selecting a plurality of pixels relating to a region or object to which the image process is to be applied.
10. The method according to claim 9, wherein the selected image process is applied to the selected pixels to modify a relative perceived depth of the selected pixels in the image.
11. The method according to claim 1, further comprising acquiring the image in a scanline manner.
12. The method according to claim 11, wherein the steps of determining the distribution for the depth values, determining the distribution for the image feature characteristics, selecting the image process, and applying the image process are implemented as each row of the image is acquired.
13. The method according to claim 1, wherein the image process modifies the relative perceived depth of the plurality of regions to emphasise a subject of the image.
14. The method according to claim 1, further comprising determining a depth map of the depth values for the image.
15. The method of claim 1, further comprising extracting the image feature characteristics based on a superpixel segmentation of the image.
16. The method of claim 1, wherein the depth values relate to one of physical depth values or perceptual depth values of objects in a scene of the image.
17. A non-transitory computer readable storage medium, a computer program for modifying an image stored on the storage medium, comprising: code for determining a distribution of depth values associated with a plurality of regions in the image, the image captured using an image capture device; code for determining a distribution of image feature characteristics associated with content in the plurality of image regions; code for selecting an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics; and code for applying the selected image process to the image to modify a relative perceived depth of the plurality of regions in the image.
18. An image capture device configured to: capture an image of a scene; determine a distribution of depth values associated with a plurality of regions in the captured image; determine a distribution of image feature characteristics associated with content in the plurality of image regions; select an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics; and apply the selected image process to the captured image to modify a relative perceived depth of the plurality of regions in the captured image.
19. A system for modifying an image, the system comprising: an image capture device for capturing an image, a processor; and a memory, the memory having instructions thereon executable by the processor to modify the captured image by: determining a distribution of depth values associated with a plurality of regions in the image, the image captured using an image capture device; determining a distribution of image feature characteristics associated with content in the plurality of image regions; selecting an image process from a plurality of image processes for each of the plurality of regions based on the determined distribution of depth values and the determined distribution of image feature characteristics; and applying the selected image process to the image to modify a relative perceived depth of the plurality of regions in the image.
20. A method of modifying an image, said method comprising the steps of: determining a distribution of depth values and a distribution of image feature characteristics associated with a plurality of regions in the image, the image captured using an image capture device, wherein an alignment of the distribution of the depth values and the distribution of the image feature characteristics identifies a plurality of objects in the image; selecting, for each object of the identified plurality of objects, an image process from a plurality of image processes based on the depth values and image feature characteristics associated with the object; and applying the selected image processes to regions of the image to modify a relative perceived depth in the image with respect to the plurality of objects in the image.
AU2015271983A 2015-12-18 2015-12-21 System and method for modifying an image Abandoned AU2015271983A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2015271983A AU2015271983A1 (en) 2015-12-21 2015-12-21 System and method for modifying an image
US15/381,466 US10198794B2 (en) 2015-12-18 2016-12-16 System and method for adjusting perceived depth of an image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2015271983A AU2015271983A1 (en) 2015-12-21 2015-12-21 System and method for modifying an image

Publications (1)

Publication Number Publication Date
AU2015271983A1 true AU2015271983A1 (en) 2017-07-06

Family

ID=59249024

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2015271983A Abandoned AU2015271983A1 (en) 2015-12-18 2015-12-21 System and method for modifying an image

Country Status (1)

Country Link
AU (1) AU2015271983A1 (en)

Similar Documents

Publication Publication Date Title
CN108764091B (en) Living body detection method and apparatus, electronic device, and storage medium
US11756223B2 (en) Depth-aware photo editing
US10205896B2 (en) Automatic lens flare detection and correction for light-field images
US9569854B2 (en) Image processing method and apparatus
US8983176B2 (en) Image selection and masking using imported depth information
US9307222B1 (en) Configuration settings of a digital camera for depth map generation
US10026183B2 (en) Method, system and apparatus for determining distance to an object in a scene
US20230260145A1 (en) Depth Determination for Images Captured with a Moving Camera and Representing Moving Features
JP2020536327A (en) Depth estimation using a single camera
US9129435B2 (en) Method for creating 3-D models by stitching multiple partial 3-D models
US20170018088A1 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
US9756261B2 (en) Method for synthesizing images and electronic device thereof
US20160063715A1 (en) Method, system and apparatus for forming a high resolution depth map
US10148895B2 (en) Generating a combined infrared/visible light image having an enhanced transition between different types of image information
US9536321B2 (en) Apparatus and method for foreground object segmentation
JP2018510324A (en) Method and apparatus for multi-technology depth map acquisition and fusion
US11070717B2 (en) Context-aware image filtering
AU2013263760A1 (en) Method, system and apparatus for determining a depth value of a pixel
KR20160021607A (en) Method and device to display background image
US20140198177A1 (en) Realtime photo retouching of live video
CN103543916A (en) Information processing method and electronic equipment
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
AU2016273979A1 (en) System and method for adjusting perceived depth of an image
AU2016273984A1 (en) Modifying a perceptual attribute of an image using an inaccurate depth map
CN110177216A (en) Image processing method, device, mobile terminal and storage medium

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application