AU2014277652A1 - Method of image enhancement based on perception of balance of image features - Google Patents


Info

Publication number
AU2014277652A1
Authority
AU
Australia
Prior art keywords
image
image area
area
adjustment
reference image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2014277652A
Inventor
Veena Murthy Srinivasa Dodballapur
Thai Quan Huynh-Thu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2014277652A priority Critical patent/AU2014277652A1/en
Publication of AU2014277652A1 publication Critical patent/AU2014277652A1/en


Abstract

METHOD OF IMAGE ENHANCEMENT BASED ON PERCEPTION OF BALANCE OF IMAGE FEATURES

A method of modifying an image is disclosed. The image is segmented to form an adjustment image area and a reference image area. A feature value of image data in the reference image area is determined. A ratio of the size of the adjustment image area and the reference image area is determined. Image data in the adjustment image area is modified according to a function of the feature value of image data in the reference image area and the determined size ratio.

[Accompanying drawing: the flow diagram of Fig. 2, showing the steps of accessing the image, segmenting the image, assigning reference and adjustment image areas, determining the size ratio and a representative feature value, determining the functional form of the balance model, and modifying the adjustment image area.]

Description

METHOD OF IMAGE ENHANCEMENT BASED ON PERCEPTION OF BALANCE
OF IMAGE FEATURES
TECHNICAL FIELD
The current invention relates generally to image processing and, in particular, to image enhancement and subjective quality of images as perceived by humans. The present invention also relates to a method and apparatus for modifying an image, and to a computer program product including a computer readable medium having recorded thereon a computer program for modifying an image.
BACKGROUND
The term “photorealistic” distinguishes between the appearance of an image which seems natural or unmodified, as opposed to an image which appears to have been overtly modified, or has content which appears to be overtly synthetic or artificial.
High dynamic range (HDR) imaging technology allows an image of a real-world scene with very high dynamic range to be captured, so that the resulting image captures information in both the strong highlights and the deep shadows of the scene. High dynamic range (HDR) imaging technology allows photographers to capture a greater range of tonal details by capturing a greater dynamic range between the darkest and lightest areas of the image. Capturing a greater range of tonal details allows images with tonal details closer to what the human visual system can perceive to be produced. However, capturing such images with a photorealistic appearance is a difficult problem. A conventional method of photographing a scene with high dynamic range, with the goal of reproducing an image (i.e., viewing the image on a display or printing the image), consists of two steps: (1) capture: acquiring radiance (luminance) information from the real-world scene, and (2) display (rendering) or reproduction: adapting the dynamic range of the captured information to fit the captured information into the dynamic range of the display device or printed media. These two steps can be performed separately.
Image capturing devices, including digital cameras (e.g., digital single lens reflex (DSLR) cameras, point-and-shoot cameras), smartphones and tablets, have a sensor with a fixed and limited dynamic range. Dynamic range is a measure of the difference between the brightest area measured by the sensor without saturating the pixels (and losing the image data), and the darkest area measured by the sensor where there is enough light to differentiate the area from pure black. Dynamic range is typically measured in “F-stops” or EV values, where an increase of one “stop” equals a doubling of the brightness. The human eye can distinguish up to about fourteen (14) stops. Some new cameras have sensors that can capture a similar dynamic range under optimal conditions. However, it is typical for a camera to only capture up to ten or eleven stops. The dynamic range of a camera sensor is therefore too small to capture scenes with high dynamic range (HDR), such as scenes with strong highlights and deep shadows, or to capture a scene with very large differences in lighting conditions between different areas of the scene. When conventional imaging sensors are used to capture an image of a scene with high dynamic range, some parts of the image will be either over-exposed or under-exposed. Capturing an image of a scene with a high dynamic range usually requires capturing several images of the same scene with different exposure settings and combining the images into a high dynamic range (HDR) image.
Even if a high-dynamic range sensor is able to capture a scene with high dynamic range, the display or reproduction of the captured visual information is difficult. Display devices (e.g. camera liquid crystal display (LCD) screens, televisions, computer monitors) have a limited dynamic range of about six to ten (6-10) stops. Prints have an even smaller dynamic range, usually in the order of six (6) stops.
Rendering of a high dynamic range image of a scene on a conventional display or reproduction of such a scene on a print medium requires dynamic range compression or adjustment of the information captured by a camera sensor. The dynamic range compression process is often termed tone mapping. Tone mapping affects the appearance of information captured by the camera sensor. A linear compression is non-optimal because linear compression leads to the disappearance of tonal details in high luminance levels (highlights) and/or low luminance levels (shadows).
Tone mapping consists of adapting image contrast to fit the dynamic range of the high dynamic range (HDR) image into the smaller dynamic range rendering capability of the display device or the printing device, while trying to preserve the details of the high dynamic range (HDR) image such as highlights and shadows. However, the adaptation process can remove the details of the high dynamic range (HDR) image or create non-photorealistic results.
State-of-the-art tone mapping algorithms apply global or local contrast changes in the image while compressing the dynamic range of the image to that of the display or print.
However, a typical result of such an adaptation process is an image that is not natural or not photorealistic. For example, the image may look overtly artificial because of overly harsh contrast or over-saturated colours. The image may also be too bright or too dark. The image may also have a colour cast, unnatural colours or an unrealistic lighting appearance.
For tone mapping or tone conversion of an image, a tone curve that varies with the characteristics of the image may be used instead of using a pre-defined fixed tone curve. One conventional tone mapping method determines a tone curve by determining a plurality of reference tone curves computed for a plurality of image regions, and computing a weighted average of the respective reference tone curves. The resulting weight-averaged tone curve is then applied globally to the image. However, a disadvantage of this known method is that the method applies a global modification to the image and therefore can produce unrealistic images or images with an artificial appearance. Adapting the entire image may be undesirable as adapting the entire image can create an image with an unnatural aesthetic appearance. For example, natural objects and backgrounds (e.g. trees, plants, ocean, sky...) have pre-defined appearances or representations in the human brain and changing the contrast or luminance of the entire image may create an artificial appearance of such natural objects and backgrounds. Furthermore, the lighting across the image or in different areas of the image needs to be consistent with the subjective expectation of humans to produce a photorealistic image.
Another tone mapping method retains the natural quality of a high-dynamic range scene by creating a new image. The new image is created by selectively combining contrast adapted and non-contrast adapted versions of the same scene. The non-contrast adapted version is a standard non-HDR image. The contrast adapted version is obtained by either processing HDR images or by processing the standard image. However, a disadvantage of such a tone mapping method, which creates a new image, is that the method requires at least two different images of the same scene.
SUMMARY
It is an object of the present invention to overcome, or at least ameliorate, one or more disadvantages of existing prior art.
Disclosed are arrangements which seek to address the above problems by providing a method of modifying an image to produce an image with a photorealistic aspect as perceived by human observers.
According to one aspect of the present disclosure, there is provided a method of modifying an image, said method comprising the steps of: segmenting the image to form an adjustment image area and a reference image area; determining a feature value of image data in the reference image area; determining a ratio of the size of the adjustment image area and the reference image area; modifying image data in the adjustment image area, according to a function of the feature value of image data in the reference image area and the determined size ratio.
According to another aspect of the present disclosure, there is provided a system for modifying an image, said system comprising: a memory for storing data and a computer program; a processor coupled to said memory for executing said computer program, said computer program comprising instructions for: segmenting the image to form an adjustment image area and a reference image area; determining a feature value of image data in the reference image area; determining a ratio of the size of the adjustment image area and the reference image area; and modifying image data in the adjustment image area, according to a function of the feature value of image data in the reference image area and the determined size ratio.
According to still another aspect of the present disclosure, there is provided an apparatus for modifying an image, said apparatus comprising: means for segmenting the image to form an adjustment image area and a reference image area; means for determining a feature value of image data in the reference image area; means for determining a ratio of the size of the adjustment image area and the reference image area; and means for modifying image data in the adjustment image area, according to a function of the feature value of image data in the reference image area and the determined size ratio.
According to one aspect of the present disclosure, there is provided a computer readable medium having a computer program stored thereon for modifying an image, said program comprising: code for segmenting the image to form an adjustment image area and a reference image area; code for determining a feature value of image data in the reference image area; code for determining a ratio of the size of the adjustment image area and the reference image area; code for modifying image data in the adjustment image area, according to a function of the feature value of image data in the reference image area and the determined size ratio.
Other aspects of the invention are also disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described with reference to the following drawings, in which:
Fig. 1 is a schematic flow diagram showing a method of modifying an image;
Fig. 2 is a schematic flow diagram showing a method of modifying an image, as executed in the method of Fig. 1;
Fig. 3A is an example segmented image, with one reference image area and one adjustment image area;
Fig. 3B is another example segmented image;
Fig. 4 is a schematic flow diagram showing a method of determining a ratio value, as executed in the method of Fig. 2;
Fig. 5 is a schematic flow diagram showing a method of determining a representative feature value, as executed in the method of Fig. 2;
Fig. 6 is an example of a scene type look-up-table;
Fig. 7 is a schematic flow diagram showing a method of modifying an adjustment area, as executed in the method of Fig. 2;
Fig. 8A shows an example of a segmented image, determined using the method of Fig. 1;
Fig. 8B shows an example of the segmented image of Fig. 8A after modification;
Fig. 9 is a schematic flow diagram showing a method of modifying an image;
Fig. 10A shows an example of a segmented image determined using the method of Fig. 1;
Fig. 10B shows the segmented image of Fig. 10A after modification;
Fig. 11A shows an example of a segmented image determined using the method of Fig. 1;
Fig. 11B shows the segmented image of Fig. 11A after modification;
Fig. 12A shows an example of a segmented image determined using the method of Fig. 1;
Fig. 12B shows the segmented image of Fig. 12A after modification;
Fig. 13 is a schematic flow diagram showing a method of assigning an area of the image as a reference image area, as executed in the method of Fig. 2;
Figs. 14A and 14B collectively form a schematic block diagram representation of an electronic device upon which described arrangements can be practised.
DETAILED DESCRIPTION INCLUDING BEST MODE
Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
Figs. 14A and 14B collectively form a schematic block diagram of a general purpose electronic device 1401 including embedded components, upon which methods to be described below are desirably practiced. The electronic device 1401 may be, for example, a mobile phone, a portable media player or a digital camera, in which processing resources are limited. Nevertheless, the methods to be described may also be performed on higher-level devices such as desktop computers, server computers, and other such devices with significantly larger processing resources.
As seen in Fig. 14A, the electronic device 1401 comprises an embedded controller 1402. Accordingly, the electronic device 1401 may be referred to as an “embedded device.” In the present example, the controller 1402 has a processing unit (or processor) 1405 which is bidirectionally coupled to an internal storage module 1409. The storage module 1409 may be formed from non-volatile semiconductor read only memory (ROM) 1460 and semiconductor random access memory (RAM) 1470, as seen in Fig. 14B. The RAM 1470 may be volatile, non-volatile or a combination of volatile and non-volatile memory.
The electronic device 1401 includes a display controller 1407, which is connected to a video display 1414, such as a liquid crystal display (LCD) panel or the like. The display controller 1407 is configured for displaying graphical images on the video display 1414 in accordance with instructions received from the embedded controller 1402, to which the display controller 1407 is connected.
The electronic device 1401 also includes user input devices 1413 which are typically formed by keys, a keypad or like controls. In some implementations, the user input devices 1413 may include a touch sensitive panel physically associated with the display 1414 to collectively form a touch-screen. Such a touch-screen may thus operate as one form of graphical user interface (GUI) as opposed to a prompt or menu driven GUI typically used with keypad-display combinations. Other forms of user input devices may also be used, such as a microphone (not illustrated) for voice commands or a joystick/thumb wheel (not illustrated) for ease of navigation about menus.
As seen in Fig. 14A, the electronic device 1401 also comprises a portable memory interface 1406, which is coupled to the processor 1405 via a connection 1419. The portable memory interface 1406 allows a complementary portable memory device 1425 to be coupled to the electronic device 1401 to act as a source or destination of data or to supplement the internal storage module 1409. Examples of such interfaces permit coupling with portable memory devices such as Universal Serial Bus (USB) memory devices, Secure Digital (SD) cards, Personal Computer Memory Card International Association (PCMCIA) cards, optical disks and magnetic disks.
The electronic device 1401 also has a communications interface 1408 to permit coupling of the device 1401 to a computer or communications network 1420 via a connection 1421. The connection 1421 may be wired or wireless. For example, the connection 1421 may be radio frequency or optical. An example of a wired connection includes Ethernet. Further, an example of wireless connection includes Bluetooth™ type local interconnection, Wi-Fi (including protocols based on the standards of the IEEE 802.11 family), Infrared Data Association (IrDa) and the like.
Typically, the electronic device 1401 is configured to perform some special function. The embedded controller 1402, possibly in conjunction with further special function components 1410, is provided to perform that special function. As described here, the components 1410 represent a lens, focus control and image sensor of a digital camera. The special function components 1410 are connected to the embedded controller 1402.
In another arrangement, the device 1401 is in the form of a mobile telephone handset, where the components 1410 include those components required for communications in a cellular telephone environment. In still another arrangement, the device 1401 is a portable device; the special function components 1410 include a number of encoders and decoders of a type including Joint Photographic Experts Group (JPEG), Moving Picture Experts Group (MPEG), MPEG-1 Audio Layer 3 (MP3), and the like.
The methods described hereinafter may be implemented using the embedded controller 1402, where the processes of Figs. 1 to 13 may be implemented as one or more software application programs 1433 executable within the embedded controller 1402. The electronic device 1401 of Fig. 14A implements the described methods. In particular, with reference to Fig. 14B, the steps of the described methods are effected by instructions in the software 1433 that are carried out within the controller 1402. The software instructions may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
The software 1433 of the embedded controller 1402 is typically stored in the nonvolatile ROM 1460 of the internal storage module 1409. The software 1433 stored in the ROM 1460 can be updated when required from a computer readable medium. The software 1433 can be loaded into and executed by the processor 1405. In some instances, the processor 1405 may execute software instructions that are located in RAM 1470. Software instructions may be loaded into the RAM 1470 by the processor 1405 initiating a copy of one or more code modules from ROM 1460 into RAM 1470. Alternatively, the software instructions of one or more code modules may be pre-installed in a non-volatile region of RAM 1470 by a manufacturer. After one or more code modules have been located in RAM 1470, the processor 1405 may execute software instructions of the one or more code modules.
The application program 1433 is typically pre-installed and stored in the ROM 1460 by a manufacturer, prior to distribution of the electronic device 1401. However, in some instances, the application programs 1433 may be supplied to the user encoded on one or more CD-ROM (not shown) and read via the portable memory interface 1406 of Fig. 14A prior to storage in the internal storage module 1409 or in the portable memory 1425. In another alternative, the software application program 1433 may be read by the processor 1405 from the network 1420, or loaded into the controller 1402 or the portable storage medium 1425 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that participates in providing instructions and/or data to the controller 1402 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magnetooptical disk, flash memory, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the device 1401. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the device 1401 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like. A computer readable medium having such software or computer program recorded on it is a computer program product.
The second part of the application programs 1433 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1414 of Fig. 14A. Through manipulation of the user input device 1413 (e.g., the keypad), a user of the device 1401 and the application programs 1433 may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via loudspeakers (not illustrated) and user voice commands input via the microphone (not illustrated).
Fig. 14B illustrates in detail the embedded controller 1402 having the processor 1405 for executing the application programs 1433 and the internal storage 1409. The internal storage 1409 comprises read only memory (ROM) 1460 and random access memory (RAM) 1470. The processor 1405 is able to execute the application programs 1433 stored in one or both of the connected memories 1460 and 1470. When the electronic device 1401 is initially powered up, a system program resident in the ROM 1460 is executed. The application program 1433 permanently stored in the ROM 1460 is sometimes referred to as “firmware”. Execution of the firmware by the processor 1405 may fulfil various functions, including processor management, memory management, device management, storage management and user interface.
The processor 1405 typically includes a number of functional modules including a control unit (CU) 1451, an arithmetic logic unit (ALU) 1452, a digital signal processor (DSP) 1453 and a local or internal memory comprising a set of registers 1454 which typically contain atomic data elements 1456, 1457, along with internal buffer or cache memory 1455. One or more internal buses 1459 interconnect these functional modules. The processor 1405 typically also has one or more interfaces 1458 for communicating with external devices via system bus 1481, using a connection 1461.
The application program 1433 includes a sequence of instructions 1462 through 1463 that may include conditional branch and loop instructions. The program 1433 may also include data, which is used in execution of the program 1433. This data may be stored as part of the instruction or in a separate location 1464 within the ROM 1460 or RAM 1470.
In general, the processor 1405 is given a set of instructions, which are executed therein. This set of instructions may be organised into blocks, which perform specific tasks or handle specific events that occur in the electronic device 1401. Typically, the application program 1433 waits for events and subsequently executes the block of code associated with that event. Events may be triggered in response to input from a user, via the user input devices 1413 of Fig. 14A, as detected by the processor 1405. Events may also be triggered in response to other sensors and interfaces in the electronic device 1401.
The execution of a set of the instructions may require numeric variables to be read and modified. Such numeric variables are stored in the RAM 1470. The disclosed method uses input variables 1471 that are stored in known locations 1472, 1473 in the memory 1470. The input variables 1471 are processed to produce output variables 1477 that are stored in known locations 1478, 1479 in the memory 1470. Intermediate variables 1474 may be stored in additional memory locations in locations 1475, 1476 of the memory 1470. Alternatively, some intermediate variables may only exist in the registers 1454 of the processor 1405.
The execution of a sequence of instructions is achieved in the processor 1405 by repeated application of a fetch-execute cycle. The control unit 1451 of the processor 1405 maintains a register called the program counter, which contains the address in ROM 1460 or RAM 1470 of the next instruction to be executed. At the start of the fetch execute cycle, the contents of the memory address indexed by the program counter is loaded into the control unit 1451. The instruction thus loaded controls the subsequent operation of the processor 1405, causing for example, data to be loaded from ROM memory 1460 into processor registers 1454, the contents of a register to be arithmetically combined with the contents of another register, the contents of a register to be written to the location stored in another register and so on. At the end of the fetch execute cycle the program counter is updated to point to the next instruction in the system program code. Depending on the instruction just executed this may involve incrementing the address contained in the program counter or loading the program counter with a new address in order to achieve a branch operation.
Each step or sub-process in the processes of the methods described below is associated with one or more segments of the application program 1433, and is performed by repeated execution of a fetch-execute cycle in the processor 1405 or similar programmatic operation of other independent processor blocks in the electronic device 1401.
Fig. 1 is a flow diagram showing a method 100 of modifying an image. The method 100 generates a photorealistic image. The method 100 may be implemented as one or more of the software application programs 1433 resident in the internal storage module 1409 and being controlled in its execution by the processor 1405.
The method 100 begins at capturing step 110, where an image of a real-world high-dynamic range scene 105 is captured by the device 1401, under execution of the processor 1405. In one arrangement, the device 1401 is used to capture three (3) images of the scene using different exposure settings. Capturing the images using different exposure settings may be termed image bracketing. One of the captured images is correctly exposed for the mid-tones, one image is under-exposed and one image is over-exposed. The correctly exposed image has a histogram without blown highlights and without clipped shadows. If more information is necessary in the low tones or high tones, a higher number of under-exposed or a higher number of over-exposed images may be captured at step 110.
Next, at processing step 120, the bracketed images are processed under execution of the processor 1405. In one arrangement, step 120 may be executed by an image processing chip configured within the device 1401. At step 120, the multiple captured images of the scene are combined to generate a new image referred to as an “HDR image” 125. The HDR image 125 contains all the luminance information gathered in the multiple images. The HDR image 125 may be stored internally in the storage module 1409 of the device 1401.
Next, at a tone-mapping step 130, the HDR image 125 is converted into an image 135 that can be displayed on the display 1414 of the device 1401 under execution of the processor 1405. In one arrangement, the tone-mapping step 130 may be performed by the image processing chip as described above. At the tone-mapping step 130, the dynamic range of the HDR image 125 is compressed into a dynamic range that can be rendered by the display 1414, to generate the image 135. The image 135 determined at step 130 may be stored in storage module 1409 by the processor 1405. At step 130, it is possible to display the image 135 on the display of the device 1401. However, the image 135 usually has an un-realistic appearance.
At a next modifying step 140, the image 135 determined at step 130 is modified, under execution of the processor 1405, to generate a modified image 150 with a photorealistic appearance. The modified image 150 may be stored in the storage module 1409 by the processor 1405. In one arrangement, step 140 may be performed by the image processing chip as described above. The modified image 150 is a photo-realistic image that may be displayed on the display 1414 of the device 1401. A method 200 of modifying an image, as executed at step 140, will be described in detail below with reference to Fig. 2. In another arrangement, a method 900 of modifying an image may alternatively be executed at step 140, as will also be described in detail below with reference to Fig. 9.
In an alternative arrangement, the bracketed images captured by the device 1401 at step 110 are transferred to an external computing device 1490 via the network 1420. In such an alternative arrangement, steps 120, 130, and 140 are executed by a software program residing on the external computing device 1490. The modified image 150 may be viewed on a display of the computing device 1490.
The method 100 can be applied in scenarios that modify an image to generate a photorealistic image, independently of whether the image was high-dynamic range or not.
The method 200 of modifying an image to generate the modified image, as executed at step 140, will now be described with reference to Fig. 2. The method 200 may be implemented as one or more of the software application programs 1433 resident in the internal storage module 1409 and being controlled in its execution by the processor 1405. In one arrangement, the method 200 may be performed by the image processing chip configured within the device 1401 to produce the modified image with photorealistic appearance.
The method 200 begins at accessing step 205, where the image 135 to be modified is accessed from the storage module 1409. The image 135 is an image generated after applying tone-mapping to the HDR image 125 at step 130, which results in an image 135 with at least one correctly exposed area. In another arrangement, the image accessed at step 205 is the unconverted HDR image 125 as generated at step 120 with at least one area correctly exposed. In yet another arrangement, no area of the image 135 accessed at step 205 is correctly exposed but an area of the image is specified such that the other regions of the image are to be adjusted according to the specified area.
The image 135 is passed to segmenting step 210, where the image 135 is segmented to form a segmented image 137 comprising two image areas. The two image areas are non-nested. The segmented image 137 may be stored in the storage module 1409 by the processor 1405.
Then at assigning step 215, the two image areas formed at step 210 are assigned as an adjustment image area 220 and a reference image area 230, under execution of the processor 1405. The adjustment image area 220 is the area of the image 135 that is modified to generate a more photorealistic image. The reference image area 230 is the area that is used to determine the target modification of the adjustment image area. Fig. 3A shows an example segmented image 137 as formed at step 215. As shown in Fig. 3A, the image 137 comprises the adjustment image area 220 and reference image area 230 which are each constituted of one image region. A method 1300 of assigning an area of the segmented image as a reference image area, as executed at step 215, will be described in detail below with reference to Fig. 13.
In another arrangement, either the adjustment image area 220 or the reference image area 230 is constituted of two or more image regions. In yet another arrangement, the adjustment image area 220 and the reference image area 230 are both constituted of two or more image regions. For example, Fig. 3B shows another segmented image 330 comprising three regions in the form of two image regions 340 and 350 where the regions 340 and 350 constitute together a reference image area. In the example of Fig. 3B, image region 360 constitutes an adjustment image area.
The adjustment image area 220 and reference image area 230 of the segmented image 137 are inputs to step 240. At determining step 240, a ratio value 245 representing a ratio of the size of the adjustment image area 220 and the reference image area 230 is determined under execution of the processor 1405. The determined size ratio value 245 may be stored in the storage module 1409 by the processor 1405. A method 400 of determining a ratio value representing a ratio of the size of the adjustment image area 220 and the reference image area 230, as executed at step 240, will be described in detail below with reference to Fig. 4.
The reference image area 230 of the segmented image 137 is also the input to step 250, where a representative feature value 255 of image data in the reference image area 230 is determined under execution of the processor 1405. A method 500 of determining a representative feature value, as executed at step 250, will be described below with reference to Fig. 5.
The method 200 continues at determining step 260, where the functional form of a balance model 265 is determined using the representative feature value 255 determined at step 250 and the size ratio value 245 determined at step 240. The balance model 265 is a function of the feature value of image data in the reference image area 230 and of the determined size ratio value 245. The balance model 265 is a perceptual model that represents a relationship as perceived by human observers between the value of a representative feature in the reference image area 230 and value of the representative feature in the adjustment image area 220. The balance model 265 indicates the target value of the representative feature in the adjustment image area 220, based on the value of the representative feature in the reference image area 230. In one arrangement, the balance model 265 is dependent on the scene type of the converted HDR image accessed at step 205. The scene type is determined in determining step 270. In one arrangement, the scene type is determined based on determination of image features. Image features such as luminance or colour histograms are determined. The features are passed on to a classifier, which determines the scene type based on the image features. The classifier may be implemented as one or more software application programs executed in an off-line process (e.g., on the external computing device) so that only the classifier itself (i.e., the mathematical description of the classifier) is stored on the device 1401. Classification methods, based on Support Vector Machine (SVM) or template matching, may be used to determine the classifier using a database of images with annotated scene types.
Once a scene type is identified at step 270, a look-up-table configured within the storage module 1409 is used to determine the corresponding functional form of the balance model for that scene type. The look-up-table lists different scene types and associates each scene type with a pre-determined functional form of a balance model, as well as associated parameters.
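A minimal sketch of how such a look-up-table might be represented in software is given below, assuming a Python dictionary keyed by scene type. The entry for scene type ST1 uses the F1 parameters a = 1.8, b = 7.3 and c = 0.05 given for Equation (2); the other scene type and its values are purely illustrative, not taken from the patent.

```python
# Hypothetical scene-type look-up-table mapping each scene type to the
# functional form of its balance model and the associated parameters.
BALANCE_MODEL_LUT = {
    "ST1_indoor_outdoor": {"form": "F1", "params": {"a": 1.8, "b": 7.3, "c": 0.05}},
    "ST2_beach":          {"form": "F2", "params": {"a": 1.2, "b": 5.0, "c": 0.10}},  # illustrative values
}

def lookup_balance_model(scene_type):
    """Return the functional form and parameters of the balance model
    associated with a classified scene type."""
    return BALANCE_MODEL_LUT[scene_type]
```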
In another arrangement, the scene type can be manually selected from a table by a user. Fig. 6 shows an example of look-up-table 600 with stored scene types. For example, scene type 1 (ST1) in table 600 identifies the scene type “indoor/outdoor”, in which a part of the image captured at step 110, for example, represents an indoor scene (e.g. living room in a house) and another part of the image captured at step 110 represents an outdoor scene (e.g. the outdoor of a house seen through the windows of the living room). From the table 600, the corresponding function of the balance model for scene type ST1 is F1 and the parameters associated with the function F1 are a, b, and c. For example, a functional form F1 of the balance model for the scene type “indoor/outdoor” is indicated by Equation (1), below, in which the mean luminance of the adjustment image area 220 is equal to the sum of two terms. The first term is a function A, which is a function of the mean luminance in the reference image area 230 and the ratio of the size of the reference image area 230 and the adjustment image area 220 (e.g., size ratio value 245). The second term is a function B, which is a function of the size ratio and a parameter c. Equation (2), below, provides a functional form example of Equation (1), with three parameters a, b and c, as indicated by function F1(a, b, c) in Fig. 6. The first term is equal to the ratio of the size of the reference image area 230 and the adjustment image area 220 multiplied by the mean luminance of the reference image area 230 multiplied by a constant (parameter a). The second term is equal to a constant (parameter b) divided by the sum of a constant (parameter c) and the size ratio.
$Lmean_{adjustment\_area} = A(ratio, Lmean_{reference\_area}) + B(c, ratio)$    (1)
$Lmean_{adjustment\_area} = a \cdot ratio \cdot Lmean_{reference\_area} + \dfrac{b}{ratio + c}$    (2), where a = 1.8, b = 7.3 and c = 0.05.
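As a minimal sketch, assuming luminance values and the size ratio defined as above, Equation (2) can be evaluated as follows; the function and argument names are illustrative assumptions.

```python
def target_adjustment_mean_luminance(mean_ref_luminance, size_ratio,
                                     a=1.8, b=7.3, c=0.05):
    """Balance model of Equation (2): target mean luminance of the adjustment
    image area, as a function of the mean luminance of the reference image
    area and the size ratio of the two areas."""
    return a * size_ratio * mean_ref_luminance + b / (size_ratio + c)
```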
The output of step 260 is the balance model 265. Next, at modifying step 280, the image data (e.g., pixels) of the adjustment image area 220 are modified according to the balance model 265 to produce the modified image 150. As described above, the balance model 265 is a function of the feature value of image data in the reference image area 230 and of the determined size ratio value 245.
Fig. 8A shows an example of the segmented image 137 as generated at step 210. In Fig. 8A, the image 137 consists of reference image area 230 and adjustment image area 220. Fig. 8B shows an example of the modified image 150 resulting from modification of the image data (e.g., pixels) of the adjustment image area 220 in the image 137.
The pixel values of the adjustment image area 220 are modified according to step 280, whilst the pixels of the reference image area 230 remain unchanged in the modified image 150. A method 700 of modifying the adjustment area 220, as executed at step 280, will be described in detail below with reference to Fig. 7.
Referring to Fig. 2, the image 135 is passed to segmenting step 210, where the image 135 is segmented to form image 137 that comprises two areas, one reference area 230 and one adjustment area 220. In one arrangement, automatic image segmentation is performed at step 210 by thresholding and clustering. In another arrangement, histogram-based segmentation is used at step 210, where a histogram is determined from all of the pixels in the image 135, and the peaks and valleys in the histogram are used to locate the clusters in the image 135. The determined histogram may be stored in the storage module 1409 by the processor 1405.
Both colour and luminance (intensity) information is used to determine the clusters, which form the different segmented areas of the segmented image 137. The histogram-based method is applied recursively to the obtained clusters to divide the clusters into smaller clusters, until the pixels within a given cluster are considered to be similar according to a distance measure. Conversely, small clusters are fused into bigger clusters if the small clusters are spatially contiguous, and the colour and luminance information of the small clusters is considered to be similar according to the distance measure. In one arrangement, the distance measure is the squared or absolute difference between a pixel and a cluster centre.
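The clustering described above can be implemented in many ways. The sketch below is only an illustration, not the patent's specific algorithm: it segments a luminance image (assumed normalised to [0, 1]) into two areas by choosing the histogram threshold that maximises the between-class variance (an Otsu-style split), standing in for the peak-and-valley analysis described above.

```python
import numpy as np

def segment_two_areas(luminance):
    """Split a luminance image into two segmented areas at the histogram
    threshold that maximises the between-class variance."""
    hist, bin_edges = np.histogram(luminance, bins=256, range=(0.0, 1.0))
    prob = hist / hist.sum()
    centres = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    best_threshold, best_var = 0.5, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (prob[:t] * centres[:t]).sum() / w0
        mu1 = (prob[t:] * centres[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_threshold = var_between, bin_edges[t]
    return luminance >= best_threshold   # boolean mask separating the two areas
```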
According to another arrangement, image segmentation may be provided manually by the user via the display 1414 (e.g., where the display 1414 is a touch screen) of the device 1401. For example, the user draws the precise outlines of the segmented areas using the display 1414 of the device 1401. Alternatively, the segmentation comprises a combination of manual and automatic segmentation, where the user initiates the segmentation process at step 210 by drawing approximate outlines of the segmented areas, and automatic segmentation based on clustering is used to automatically refine the outlines.
Referring to Fig. 2, at step 240, a size ratio value 245 is determined using the adjustment area 220 and the reference area 230. The method 400 of determining a ratio value representing a ratio of the size of the adjustment image area 220 and the reference image area 230, as executed at step 240, will now be described with reference to Fig. 4. The method 400 may be implemented as one or more of the software application programs 1433 resident in the internal storage module 1409 and being controlled in its execution by the processor 1405.
The method 400 begins at determining step 430, where the number of pixels N1 constituting the adjustment image area 220 is determined, under execution of the processor 1405. The determined number of pixels N1 may be stored in the storage module 1409 by the processor 1405. In parallel to step 430, at determining step 440, the number of pixels N2 constituting the reference image area 230 is determined.
Then at determining step 450, the ratio N1/N2 is determined under execution of the processor 1405. The determined ratio N1/N2 represents the ratio value 245 and may be stored in the storage module 1409 by the processor 1405. In an alternative arrangement, the ratio of the size of the adjustment image area 220 or the reference image area 230 can be determined relative to the total number of pixels of the image 135.
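Assuming the two areas are represented as boolean masks, steps 430 to 450 can be sketched as follows (the names are illustrative):

```python
import numpy as np

def size_ratio(adjustment_mask, reference_mask):
    """Method 400: ratio N1/N2 of the number of pixels in the adjustment
    image area to the number of pixels in the reference image area."""
    n1 = np.count_nonzero(adjustment_mask)   # step 430
    n2 = np.count_nonzero(reference_mask)    # step 440
    return n1 / n2                           # step 450
```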
In one arrangement, in addition to the ratio of the size of the adjustment image area 220 and reference image area 230, the spatial proximity between pixels in the adjustment image area 220 and the reference image area 230 is used to determine the balance model.
The method 900 of modifying an image to generate a modified image 950, which may be executed at step 140 instead of the method 200, will now be described with reference to Fig. 9. The method 900 may be implemented as one or more of the software application programs 1433 resident in the internal storage module 1409 and being controlled in its execution by the processor 1405. All steps of the method 900 described with reference to Fig. 9 are the same as the steps of the method 200 described in Fig. 2, with the exception of an additional step 946 as described below.
At determining step 946, the distance between each pixel in the adjustment image area 220 and a location of the reference image area 230 is determined under execution of the processor 1405. The distance determined at step 946 may be stored in the storage module 1409 by the processor 1405. When the centre of the reference image area 230 falls outside the adjustment image area 220, the Euclidean distance is determined to the centre of the reference image area 230, as indicated by Equation (3) below:

$dist(pixel, ref\_area) = \sqrt{(x_{pixel} - x_{centre\_ref\_area})^2 + (y_{pixel} - y_{centre\_ref\_area})^2}$    (3)
To determine the centre of the reference image area 230, any method that determines the centroid of a region may be used. For example, the x coordinate of the centre of the reference image area 230 may be determined as the mean value of the x coordinate of pixels of the reference image area 230. Likewise, the y coordinate of the centre of the reference image area 230 may be determined as the mean value of the y coordinate of pixels of the reference image area 230.
When the centre of the reference image area 230 falls inside the adjustment image area 220, the distance between each pixel in the adjustment image area 220 and the reference image area 230 is instead determined to the closest pixel in the reference image area 230, as indicated by Equation (4):

$dist(pixel, ref\_area) = \sqrt{(x_{pixel} - x_{closest\_ref\_area})^2 + (y_{pixel} - y_{closest\_ref\_area})^2}$    (4)
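A minimal sketch of the distance computation of Equations (3) and (4) is given below, assuming boolean masks for the two areas and using the mean pixel coordinates as the centre of the reference image area; all names are illustrative.

```python
import numpy as np

def pixel_to_reference_distance(pixel_xy, reference_mask, adjustment_mask):
    """Distance used at step 946 between a pixel of the adjustment image area
    and the reference image area, per Equations (3) and (4)."""
    ys, xs = np.nonzero(reference_mask)
    cx, cy = xs.mean(), ys.mean()            # centroid of the reference area
    px, py = pixel_xy
    if not adjustment_mask[int(round(cy)), int(round(cx))]:
        # Equation (3): the centre of the reference area lies outside the
        # adjustment area, so measure the distance to that centre.
        return np.hypot(px - cx, py - cy)
    # Equation (4): the centre lies inside the adjustment area, so measure
    # the distance to the closest pixel of the reference area instead.
    return np.min(np.hypot(px - xs, py - ys))
```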
The distance determined at step 946 is used with the size ratio value 245 and the representative feature value 255 determined at step 250 to determine the balance model 265. Equation (5) shows a functional form of the balance model.
$Lmean_{adjustment\_area} = A(ratio, dist, Lmean_{reference\_area}) + B(c, ratio, dist)$    (5)
In one arrangement, the strength of the modification of the pixel is weighted by a function of the inverse of the distance to the reference image area 230. As an example, the function is a Gaussian function.
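One plausible reading of this weighting, assuming a Gaussian fall-off with distance so that pixels far from the reference image area are modified less strongly, is sketched below; the width parameter sigma is an assumption, not a value given in the text.

```python
import numpy as np

def modification_weight(distance, sigma=50.0):
    """Weight of the modification applied to a pixel of the adjustment area,
    decreasing with its distance (in pixels) to the reference image area."""
    return np.exp(-(distance ** 2) / (2.0 * sigma ** 2))
```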
Fig. 10A shows an example of the segmented image 137 generated at step 210 of the method 900. In Fig. 10A, the image 137 consists of reference image area 230 and adjustment image area 220. Distance dl between pixel 1040 in the adjustment image area 220 and a centre 1050 of the reference image area 230 is larger than distance d2 between pixel 1030 in the adjustment image area 220 and centre 1050 of the reference image area 230. The two pixels 1030 and 1040 are modified differently. Fig. 10B shows image 950 after the different modification of pixels 1030 and 1040 in the adjustment image area 220. The pixels of the reference image area 230 remain unchanged.
Referring to Fig. 2, at step 250, the representative feature value 255 of the reference image area 230 is determined. The method 500 of determining a representative feature value, as executed at step 250, will be described below with reference to Fig. 5. The method 500 may be implemented as one or more of the software application programs 1433 resident in the internal storage module 1409 and being controlled in its execution by the processor 1405.
The method 500 begins at determining step 510, where a feature value representing a feature in the reference image area 230 is determined under execution of the processor 1405. The image feature determined at step 510 may be stored in the storage module 1409 by the processor 1405. In one arrangement, the image feature determined at step 510 is luminance information (pixel intensity). In other arrangements, other features, such as chroma, hue, saturation, are determined in the reference image area 230 in step 510. A combination of several image features can also be used at step 510 to determine a feature in the reference image area 230.
After step 510, the method 500 proceeds to determining step 520, where a statistical value, representing a statistic of the determined feature, is determined under execution of the processor 1405. The statistical value determined at step 520 may be stored in the storage module 1409 by the processor 1405. The statistical value determined at step 520 is assigned to be the representative feature value 255 of the representative feature of the reference image area 230. In one arrangement, all pixel image data of the reference image area 230 are used to determine the representative feature value of the reference image area 230.
In another arrangement, only the most relevant pixels are selected to determine the representative feature value 255 of the reference image area 230. The most relevant pixels may be determined to be those pixels within a reduced range of the pixel values [min, max]. For example, in one arrangement, only pixels within the [5%, 95%] range of possible values are selected to determine the representative feature value 255 of the reference image area 230. In yet another arrangement, the bounds min and max are dependent on the selected image feature.
In yet another arrangement, pixels with a feature value representing less than x% of the distribution of values in the histogram are discarded from the determination of the statistical value at step 520. For example, pixels with luminance values representing less than 5% of the distribution of values in the histogram may not be used to determine the mean luminance of the reference image area 230.
In one arrangement, the statistical value determined at step 520 is the mean. In another arrangement, the sum is determined at step 520. In yet another arrangement, the median is determined at step 520. In yet another arrangement, the standard deviation is determined at step 520. In yet another arrangement, a combination of the previous statistics is determined. In yet another arrangement, other values derived from luminance and/or colour information are determined at step 520. The output of step 250 is a representative feature value 255 of the reference image area 230.
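A minimal sketch of steps 510 and 520, assuming luminance as the feature, the mean as the statistic and the [5%, 95%] trimming described above, follows; the names are illustrative.

```python
import numpy as np

def representative_feature_value(reference_luminance, low=5.0, high=95.0):
    """Method 500: representative feature value of the reference image area,
    here the mean luminance of pixels inside the [low%, high%] value range."""
    lo, hi = np.percentile(reference_luminance, [low, high])
    kept = reference_luminance[(reference_luminance >= lo) & (reference_luminance <= hi)]
    return kept.mean()
```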
Referring to Fig. 2, at step 280, the pixel image data of the adjustment image area 220 are modified according to the balance model 265. A method 700 of modifying the adjustment area 220, as executed at step 280, will now be described with reference to Fig. 7. The method 700 may be implemented as one or more of the software application programs 1433 resident in the internal storage module 1409 and being controlled in its execution by the processor 1405.
As seen in Fig. 7, balance model 265, as illustrated by Equation (2), indicates a target value 760 of the representative feature in the adjustment image area 220 (e.g., the mean luminance in the adjustment area in Equation (2)), as a function of the representative feature value in the reference image area 230 (e.g. the mean luminance in the reference area in Equation (2)). Each pixel in the adjustment image area 220 is modified to maintain the relationship between the two image areas established by the balance model.
At determining step 730, a feature for pixels in the adjustment image area 220 is determined under execution of the processor 1405. In one arrangement, the feature in the adjustment image area 220 determined at step 730 is the same as the feature in the reference image area 230. In another arrangement, in addition to the same feature as in the reference image area 230, another feature in the adjustment image area 220 is also determined.
At determining step 740, a statistical value representing a statistic of the determined feature is determined under execution of the processor 1405. The statistical value determined at step 740 is assigned to be an initial value 750 of the representative feature of the adjustment image area 220. In one arrangement, the determined statistical value is the mean. In another arrangement, other statistics such as the sum, median, or standard deviation are determined at step 740. The target value 760 and initial value 750 of the representative feature in the adjustment image area 220 are compared in step 770. If the target value 760 and initial value 750 are determined to be similar, then the method 700 proceeds to step 775.
The image 150 is generated at step 775, where the adjustment image area 220 of the image 135 is not modified, as the initial value 750 satisfies the relationship expressed by the balance model. The image 150 generated at step 775 may be stored in the storage module 1409 by the processor 1405. However, if the target value 760 and initial value 750 are determined to be different at step 770, then the method 700 proceeds to step 780. At step 780, the image 150 is generated by modifying the pixel image data of the adjustment image area 220 of the image 135 to satisfy the relationship expressed by the balance model.
In one arrangement, the pixels in the adjustment image area 220 are equally adjusted to generate the image 150. For example, all pixels in the adjustment image area 220 may have their luminance shifted by the amount necessary to obtain a mean luminance which is equal to the target value specified by the balance model. In another arrangement, the pixels in the adjustment image area 220 are not equally shifted but are modified differently while satisfying the target value of the representative feature. For example, “dark” pixels (i.e., pixels with very low luminance value or luminance below a threshold T1) and “light” pixels (i.e., pixels with high luminance value or luminance above a threshold T2) are modified more strongly than mid-tone pixels (i.e., pixels with luminance between thresholds T3>T1 and T4<T2).
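A minimal sketch of the equal-adjustment case of step 780, assuming a luminance channel normalised to [0, 1] and a boolean mask for the adjustment image area, is given below; the names are illustrative.

```python
import numpy as np

def modify_adjustment_area(luminance, adjustment_mask, target_mean):
    """Step 780: shift every pixel of the adjustment image area by the same
    amount so that its mean luminance equals the target value given by the
    balance model; pixels of the reference image area are left unchanged."""
    out = luminance.copy()
    shift = target_mean - out[adjustment_mask].mean()
    out[adjustment_mask] += shift
    return np.clip(out, 0.0, 1.0)
```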
In one arrangement, the functional form of the balance model 265, as illustrated for example by Equation (2), may be determined using a training procedure on an annotated dataset of images. For example, a set of annotated images may be obtained by providing a set of images to human observers who modify the images to produce more natural-looking images. Image segmentation of the modified images may then be performed to determine a reference image area (e.g., 230) and adjustment image area (e.g., 220) in each image. In each modified image, a feature (e.g., luminance) may be extracted in each of the areas and corresponding feature statistics (e.g. mean luminance) may be determined in each area. The extracted features, represented by feature vectors, may then be used as inputs to a training process (e.g., using techniques such as regression or machine learning) to determine the functional form and corresponding parameters of the balance model 265.
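As an illustration only, assuming the functional form F1 of Equation (2) and per-image training triples (size ratio, mean reference luminance, mean adjustment luminance) extracted from such an annotated dataset, the parameters a, b and c could be fitted by non-linear regression, for example with SciPy; the names are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def f1(x, a, b, c):
    """Functional form F1 of Equation (2); x bundles the size ratio and the
    mean luminance of the reference image area."""
    ratio, mean_ref = x
    return a * ratio * mean_ref + b / (ratio + c)

def fit_balance_model(ratios, ref_means, adj_means):
    """Fit the parameters a, b, c of the balance model to feature statistics
    extracted from human-modified training images."""
    (a, b, c), _ = curve_fit(f1,
                             (np.asarray(ratios), np.asarray(ref_means)),
                             np.asarray(adj_means),
                             p0=(1.0, 1.0, 0.1))
    return a, b, c
```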
As described above, at segmenting step 210, the image 135 is segmented to form a segmented image 137 comprising two image areas in the form of the reference image area 230 and the adjustment image area 220. In another arrangement, at segmenting step 210, the image 135 is segmented to form an image 137 comprising further areas in addition to the two areas (i.e., the reference image area 230 and the adjustment image area 220) described above. In such an arrangement, the image 137 comprises more than the two areas (i.e., the reference image area 230 and the adjustment image area 220) described above. One of the further areas is the reference image area 230 and all other ones of the further areas are adjustment image areas. All adjustment image areas are modified relative to the same and unique reference image area. Fig. 11A shows an example of the segmented image 137 as generated at step 210, in which the image 137 is segmented into a reference image area 1110, and two adjustment image areas 1120 and 1130. Each adjustment image area (e.g., 1120) is modified independently from the other adjustment image area (e.g., 1130).
Referring to the method 200 of Fig. 2, steps 240, 260, 265 and 280 are repeated for each adjustment image area 1120 and 1130 of the image 137. Fig. 11B shows an example of the modified image 150 generated by modifying separately the adjustment areas 1120 and 1130 of the image 137. Both the modification of the adjustment area 1120 and the modification of the adjustment area 1130 use the same reference image area 1110. This reference image area 1110 remains the same after modification of the adjustment areas.
In yet another arrangement, the image 135 is segmented to form an image 137 comprising further areas in addition to the two areas (i.e., the reference image area 230 and the adjustment image area 220) described above. In such an arrangement, the image 137 comprises more than the two areas (i.e., the reference image area 230 and the adjustment image area 220) described above. One of the further areas is the reference image area 230 and all other ones of the further areas are adjustment image areas. In this arrangement, an adjustment image area of the segmented image 137 becomes part of the reference image area of the segmented image after the adjustment image area has been modified. The segmentation of the image 135 starts by considering a pair of image areas where one of the image areas in the pair is the reference image area and the other image area in the pair is the adjustment image area. After the modification of the adjustment image area is completed according to one arrangement, the modified adjustment image area becomes part of the reference image area. The updated reference image area is used for the modification of the next adjustment image area. The step of modifying a next adjustment image area is repeated until all adjustment image areas of the segmented image have been modified. The order of processing of adjustment image areas may be manually indicated by the user, using the display 1414 (e.g., where the display is a touch screen), or the order of processing of adjustment image areas is decided based on proximity to the initial reference image area. The distance between the initial reference image area and each adjustment image area is determined and the adjustment image area closest to the initial reference image area is processed first. For example, the distance between the initial reference image area and each adjustment image area may be determined as the Euclidean distance, as indicated by Equation (6), below:

$dist(adjust\_area, ref\_area) = \sqrt{(x_{centre\_adjust\_area} - x_{centre\_ref\_area})^2 + (y_{centre\_adjust\_area} - y_{centre\_ref\_area})^2}$    (6)
To determine the centre of the reference image area or the adjustment image area, any method that determines the centroid of a region can be used. For example, Fig. 12A shows an example of the segmented image 137 as generated at step 210, in which the image 137 is segmented into a reference image area 1210, and two adjustment image areas 1220 and 1230. Fig. 12B shows an example of the modified image 150 generated by modifying the pixels in adjustment image area 1230 so that the adjustment image area 1230 becomes part of the reference image area 1210 after modification of the adjustment area 1230.
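The ordering and merging behaviour of this arrangement can be sketched as follows; the helper names are hypothetical, the masks are assumed to be boolean arrays, and the single-pair modification routine is supplied by the caller. The centroid of each region is taken as the mean pixel position, consistent with the distance of Equation (6).

import numpy as np

def centroid(mask):
    # Centroid (x, y) of a boolean region mask.
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def process_by_proximity(luminance, ref_mask, adj_masks, modify_area):
    # modify_area : callable(luminance, ref_mask, adj_mask) -> modified luminance array,
    #               standing in for the single-pair modification described earlier.
    ref_cx, ref_cy = centroid(ref_mask)

    def dist_to_initial_reference(mask):
        # Euclidean distance of Equation (6), measured to the initial reference centroid.
        cx, cy = centroid(mask)
        return np.hypot(cx - ref_cx, cy - ref_cy)

    out = luminance.copy()
    ref = ref_mask.copy()
    for adj_mask in sorted(adj_masks, key=dist_to_initial_reference):  # closest area first
        out = modify_area(out, ref, adj_mask)
        ref = ref | adj_mask          # the modified area becomes part of the reference area
    return out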
Referring to Fig. 2, in one arrangement, the reference image area 230 is manually assigned by the user at step 215. For example, the user may assign an area of the segmented image 137 as the reference image area 230 by touching one of the segmented areas of the segmented image 137 via the video display 1414, where the display 1414 is a touch screen of the device 1401.
In another arrangement, an area of the segmented image 137 determined at step 210 is automatically assigned as the reference image area 230 at step 215. The method 1300 of assigning an area of the segmented image 137 as a reference image area, as executed at step 215, will now be described with reference to Fig. 13. The input to the method 1300 is the segmented image 137, composed of segmented Area1 1320 and segmented Area2 1330. The method 1300 may be implemented as one or more of the software application programs 1433 resident in the internal storage module 1409 and being controlled in its execution by the processor 1405.
The method 1300 begins at histogram determining steps 1340 and 1350, where histograms in Area1 1320 and Area2 1330, respectively, are determined under execution of the processor 1405. In one arrangement, luminance information is used to determine the histograms at steps 1340 and 1350. In another arrangement, colour information is used to determine the histograms. In yet another arrangement, both luminance and colour information are used to determine the histograms at steps 1340 and 1350.
Next, at number determining step 1345, a number N1 of clipped pixels in the histogram of Area1 is determined. A number N2 of clipped pixels in the histogram of Area2 is determined at step 1355. A pixel is clipped if the value of the pixel is at either the minimum value or the maximum value of the possible range. In one arrangement, the reference image area is the area with the lowest number of clipped pixels.
At step 1360, N1 is compared to N2 under execution of the processor 1405. If N1<=N2, then the method 1300 proceeds to step 1365. Otherwise, if N1>N2 then the method 1300 proceeds to step 1370.
At assigning step 1365, Area1 is assigned to be the reference image area 230 and Area2 is assigned to be the adjustment image area 1390.
At step 1370, Area2 is assigned to be the reference image area 230 and Area1 is assigned to be the adjustment image area 220. In yet another arrangement, the reference image area 230 is assigned using scene type information. For example, in the case of a scene type “beach”, in which one area of the segmented image 137 is sky and another area of the segmented image 137 is the beach, the sky is always assigned to be the reference image area 230. An area may be determined to represent ‘sky’ or ‘beach’ based on determination of image features, such as luminance or colour histograms, and using a classifier. The determined features may be passed on to a classifier, which determines the scene type based on the image features. The classifier may be implemented as one or more software application programs executed in an off-line process (e.g., on the external computing device) so that only the classifier itself (i.e., its mathematical description) is stored on the device 1401. Classification methods, based on Support Vector Machine (SVM) or template matching, may be used to determine the classifier using a database of images with annotated area types (e.g., sky, grass, beach, and face).
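As a rough illustration of steps 1340 to 1370, the following sketch counts clipped pixels from a luminance histogram of each area and assigns the area with the fewer clipped pixels as the reference image area. It assumes an 8-bit luminance image and ignores the colour-histogram and scene-type variants described above; the function name is hypothetical.

import numpy as np

def assign_reference_area(luminance, mask1, mask2, bins=256):
    # Returns (reference_mask, adjustment_mask). A pixel is 'clipped' when its value
    # sits at the minimum (0) or maximum (255) of the possible range.
    def clipped_count(mask):
        hist, _ = np.histogram(luminance[mask], bins=bins, range=(0, 255))
        return hist[0] + hist[-1]          # counts at the extremes of the range

    n1, n2 = clipped_count(mask1), clipped_count(mask2)    # N1 and N2 of steps 1345 and 1355
    return (mask1, mask2) if n1 <= n2 else (mask2, mask1)  # steps 1360, 1365 and 1370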
As described above, in the method 100, an HDR image is modified using tone-mapping. As shown in Fig. 1, the modification of the image at step 140 improves the quality of the image captured at step 110 by producing a more photorealistic or natural-looking image. The input image to the method 200 is an image 135 resulting from the HDR image 125 being tone-mapped. A more photorealistic appearance is achieved by using a balance model between a reference image area and an adjustment image area in the HDR image 125.
The described methods overcome the limitations of conventional methods as the described methods use only one input image. The described methods produce photorealistic images, as image modifications are not just applied globally to the input image or applied independently to different areas of the input image, but are applied to follow the perceptual response of human observers expressing a desired relationship of characteristics between different areas in an image. As an example, a user may want to capture an image of a scene with very high dynamic range that cannot be captured with a single image because the dynamic range of the scene being captured exceeds the dynamic range of the device 1401. The user uses the “HDR mode” of the device 1401 to capture an HDR image of the scene. The processor 1405 may be configured to automatically produce a tone-mapped version of the HDR image. The tone-mapping produces at least one correctly exposed area in the captured image. The tone-mapping also preserves shadows and highlights of the real scene. However, the tone-mapping typically produces a rendering of the scene that looks unnatural as a result of the exposure fusion or tone-mapping process. The described methods are therefore advantageously executed by the processor 1405 of the device 1401.
From analysis of the distribution of luminance and colours, the scene type may be determined to be an indoor/outdoor type under execution of the processor 1405 of the device 1401. In the present example, a segmented version of the captured HDR image is used to identify two areas, being an indoor area and an outdoor area. The user may use the display 1414 (e.g., where the display 1414 is a touch screen) of the device 1401 to select the outdoor area as the reference image area 230. As a consequence, the processor 1405 may automatically assign the indoor area to be the adjustment image area 220, in accordance with the method 1300. The described methods automatically modify the luminance of the indoor area to produce a natural-looking image. The automatic modification of the luminance is obtained by determining the size ratio between the reference image area and the adjustment image area. The mean luminance in each area is determined and the corresponding balance model for the ‘indoor/outdoor’ scene type may be provided by an internal look-up-table configured within the storage module 1409. The processor 1405 may be configured to use the selected balance model to modify the luminance level of the pixels in the adjustment image area. For example, the balance model may indicate that the luminance of the pixels in the indoor area is to be increased from a current level L1 to a level L2 to make the image more natural.
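A minimal sketch of this indoor/outdoor example is given below. The contents of the look-up-table, the linear scaling used to move the indoor mean luminance from its current level L1 to the target level L2, and the assumption of luminance values normalised to [0, 1] are all illustrative; the actual balance model is the one described earlier in this specification.

import numpy as np

# Hypothetical balance-model look-up-table keyed by scene type: it maps the size ratio
# and the reference (outdoor) mean luminance to a target mean luminance for the
# adjustment (indoor) area. The numbers are illustrative only.
BALANCE_MODELS = {
    "indoor/outdoor": lambda size_ratio, ref_mean: ref_mean * (0.6 + 0.2 * min(size_ratio, 1.0)),
}

def enhance_indoor_outdoor(luminance, outdoor_mask, indoor_mask, scene_type="indoor/outdoor"):
    # Raise the indoor mean luminance from its current level L1 to the target level L2.
    size_ratio = indoor_mask.sum() / outdoor_mask.sum()     # adjustment / reference size ratio
    ref_mean = luminance[outdoor_mask].mean()                # mean luminance of the reference area
    l1 = luminance[indoor_mask].mean()                       # current indoor level (L1)
    l2 = BALANCE_MODELS[scene_type](size_ratio, ref_mean)    # target indoor level (L2)

    out = luminance.copy()
    out[indoor_mask] = np.clip(luminance[indoor_mask] * (l2 / max(l1, 1e-6)), 0.0, 1.0)
    return out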
Industrial Applicability
The arrangements described are applicable to the computer and data processing industries and particularly for the image processing industry.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word “comprising” means “including principally but not necessarily solely” or “having” or “including”, and not “consisting only of”. Variations of the word “comprising”, such as “comprise” and “comprises”, have correspondingly varied meanings.

Claims (11)

CLAIMS:

1. A method of modifying an image, said method comprising the steps of:
segmenting the image to form an adjustment image area and a reference image area;
determining a feature value of image data in the reference image area;
determining a ratio of the size of the adjustment image area and the reference image area;
modifying image data in the adjustment image area, according to a function of the feature value of image data in the reference image area and the determined size ratio.

2. The method according to claim 1, wherein said adjustment image area and reference image area are non-nested.
3. The method according to claim 1, wherein spatial proximity between pixels of image data in the adjustment image area and the reference image area is used to modify the image data.
4. The method according to claim 1, wherein the image is segmented into further areas.

5. The method according to claim 4, wherein one of said further areas becomes part of the reference image area after said further area has been modified.

6. The method according to claim 1, wherein the reference image area is automatically assigned.

7. The method according to claim 1, further comprising adjusting a combination of said feature value and other feature values in the adjustment image area.

8. The method according to claim 1, wherein the image data is not equally modified in the adjustment image area.

9. A system for modifying an image, said system comprising:
a memory for storing data and a computer program;
a processor coupled to said memory for executing said computer program, said computer program comprising instructions for:
segmenting the image to form an adjustment image area and a reference image area;
determining a feature value of image data in the reference image area;
determining a ratio of the size of the adjustment image area and the reference image area; and
modifying image data in the adjustment image area, according to a function of the feature value of image data in the reference image area and the determined size ratio.

10. An apparatus for modifying an image, said apparatus comprising:
means for segmenting the image to form an adjustment image area and a reference image area;
means for determining a feature value of image data in the reference image area;
means for determining a ratio of the size of the adjustment image area and the reference image area; and
means for modifying image data in the adjustment image area, according to a function of the feature value of image data in the reference image area and the determined size ratio.

11. A computer readable medium having a computer program stored thereon for modifying an image, said program comprising:
code for segmenting the image to form an adjustment image area and a reference image area;
code for determining a feature value of image data in the reference image area;
code for determining a ratio of the size of the adjustment image area and the reference image area;
code for modifying image data in the adjustment image area, according to a function of the feature value of image data in the reference image area and the determined size ratio.
AU2014277652A 2014-12-15 2014-12-15 Method of image enhancement based on perception of balance of image features Abandoned AU2014277652A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2014277652A AU2014277652A1 (en) 2014-12-15 2014-12-15 Method of image enhancement based on perception of balance of image features

Publications (1)

Publication Number Publication Date
AU2014277652A1 true AU2014277652A1 (en) 2016-06-30

Family

ID=56404854

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2014277652A Abandoned AU2014277652A1 (en) 2014-12-15 2014-12-15 Method of image enhancement based on perception of balance of image features

Country Status (1)

Country Link
AU (1) AU2014277652A1 (en)


Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application