AU2015203571A1 - A method and image processing device for automatically adjusting images

A method and image processing device for automatically adjusting images

Info

Publication number
AU2015203571A1
Authority
AU
Australia
Prior art keywords
image
preference
visual attention
attention points
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2015203571A
Inventor
Alex Nyit Choy Yee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2015203571A
Publication of AU2015203571A1
Legal status: Abandoned (current)

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

A computer implemented method of automatically adjusting images is provided where the method comprises the steps of: presenting an initial image and at least one modified image to a display; determining a set of visual attention points related to image content in the initial image and the modified image; receiving a preference selection identifying a preference associated with the initial image and the modified image; selecting at least one image processing profile from a plurality of image processing profiles based on the preference selection and the determined set of visual attention points; and adjusting one or more further images based on the selected image processing profile.

[Fig. 6A flow diagram: Start; present initial image and modified image to the viewer; record visual attention points for each image-pair; capture preference selection for each image-pair; select an image processing profile based on the visual attention points and preference selection; adjust one or more further images based on the image processing profile; End.]

Description

A METHOD AND IMAGE PROCESSING DEVICE FOR AUTOMATICALLY ADJUSTING
IMAGES
TECHNICAL FIELD
The present invention relates generally to a method and image processing device for automatically adjusting images. The present invention also relates to a computer implemented method and image processing device for automatically adjusting images based on a selected image processing profile.
BACKGROUND
A key problem in the field of image processing is how to predict whether a viewer will prefer one image over another. A key selling point of many imaging devices such as cameras, scanners, printers and displays is the subjective quality of the images they produce. The closer the prediction is to the viewer's preference, the higher the subjective quality as perceived by the viewer.
There are methods known in the prior art for predicting viewer preference between two images where one of the images has undergone a transform. One method attempts to use machine learning to predict viewer preference based on features extracted from either the initial image (e.g. an original image), the modified image or both. Yet another method predicts the distribution of viewer preferences for either the initial image or the modified image, ranging from strong preference for the initial image to strong preference for the modified image as well as intermediate levels of preference.
However, all of the above prediction methods aggregate preferences from all viewers during the creation of the preference prediction model. As a consequence, these preference prediction models lack the sensitivity to address different groups of viewers. These preference prediction models perform well for the sub-group of viewers that have the most similar characteristics to the overall viewer group, but perform much worse for the sub-groups of viewers that have different characteristics to the overall viewer group.
There also exists a method that configures a display based on the preferences from an individual viewer. The method presents to the viewer text images which have been rendered based on a balance between resolution, colour accuracy and gamma settings. The viewer selects a preferred text image. The settings associated with the rendering of the preferred text image are stored as the viewer's profile. The settings are subsequently used to configure the display. Although the above method addresses the viewer's preference, it has limited application to natural images. Contents in a natural image are complex, as are the interactions of the image contents with image aesthetic properties such as contrast, luminance, saturation, noise, chroma and sharpness. None of these complexities are addressed in the prior art.
SUMMARY
It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
Disclosed are arrangements which seek to address the above problems by associating visual attention points and a preference selection with an image processing profile to automatically adjust images based on that image processing profile.
According to a first aspect of the present disclosure, there is provided a computer implemented method of automatically adjusting one or more images, said method comprising the steps of: presenting an initial image and at least one modified image for display; determining a set of visual attention points related to image content common between the initial image and the modified image; receiving a preference selection identifying a preference associated with the initial image and the modified image; selecting at least one image processing profile from a plurality of image processing profiles based on the preference selection and the determined set of visual attention points; and adjusting one or more further images based on the selected image processing profile.
According to a second aspect of the present disclosure, there is provided an image processing device arranged to automatically adjust one or more images, said device further arranged to: present an initial image and at least one modified image for display; determine a set of visual attention points related to image content common between the initial image and the modified image; receive a preference selection identifying a preference associated with the initial image and the modified image; select at least one image processing profile from a plurality of image processing profiles based on the preference selection and the determined set of visual attention points; and adjust one or more further images based on the selected image processing profile.
Other aspects are also disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described with reference to the following drawings, in which:
[0001] Figs. 1A and 1B form a schematic block diagram of a general purpose computer system upon which arrangements described can be practiced;
[0002] Figs. 2A and 2B collectively form a schematic block diagram representation of an electronic device, such as an image processing device, upon which described arrangements can be practised;
Fig. 3A is a diagram of a preference histogram of the results of a hypothetical psychophysical experiment;
Fig. 3B is a diagram of a preference distribution of the results of a hypothetical psychophysical experiment;
Fig. 4A is a diagram of a preference histogram for a viewer;
Fig. 4B is a diagram of a preference histogram for another viewer;
Fig. 4C is a diagram of a preference histogram that is aggregated from Fig. 4A and Fig. 4B;
Fig. 5 is a diagram representing a pair of images according to an embodiment of the invention;
Fig. 6A is a schematic flow diagram illustrating a method of associating an image processing profile to a viewer according to one embodiment of the invention;
Fig. 6B is a further schematic flow diagram illustrating a method of associating an image processing profile to a viewer according to one embodiment of the invention;
Fig. 7A is a schematic flow diagram illustrating the viewer preference profile creation process 640 in Fig. 6B according to one embodiment of the invention;
Fig. 7B is a diagram representing a viewer preference profile created in the method of Fig. 7A;
Fig. 8A is a schematic flow diagram illustrating the viewer preference profile creation process 640 in Fig. 6B according to another embodiment of the invention;
Fig. 8B is a diagram representing a viewer preference profile created in the method of Fig. 8A;
Fig. 9A is a schematic flow diagram illustrating the viewer preference profile creation process 640 in Fig. 6B according to the preferred embodiment of the invention;
Fig. 9B is a diagram representing a viewer preference profile created in the method of Fig. 9A;
Fig. 10 is a diagram representing another viewer preference profile according to an embodiment of the invention;
Fig. 11A is a diagram representing an example viewer preference profile that will be associated with one of the candidate image processing profiles of Fig. 11B, Fig. 11C and Fig. 11D;
Fig. 11B, Fig. 11C and Fig. 11D are diagrams representing candidate image processing profiles that may be associated with the viewer preference profile of Fig. 11A; and
Fig. 12A, Fig. 12B and Fig. 12C are diagrams representing other candidate image processing profiles that may be associated with the viewer preference profile of Fig. 11A.
DETAILED DESCRIPTION INCLUDING BEST MODE
Various aspects of the present disclosure will now be described with reference to Fig. 1A, Fig. 1B, Fig. 2A, Fig. 2B, Fig. 3A, Fig. 3B, Fig. 4A, Fig. 4B, Fig. 4C and Fig. 5.
[0003] Figs. 1A and 1B depict a general-purpose computer system 1300, upon which the various arrangements described can be practiced.
[0004] As seen in Fig. 1A, the computer system 1300 includes: a computer module 1301; input devices such as a keyboard 1302, a mouse pointer device 1303, a scanner 1326, a camera 1327, and a microphone 1380; and output devices including a printer 1315, a display device 1314 and loudspeakers 1317. An external Modulator-Demodulator (Modem) transceiver device 1316 may be used by the computer module 1301 for communicating to and from a communications network 1320 via a connection 1321. The communications network 1320 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 1321 is a telephone line, the modem 1316 may be a traditional "dial-up" modem. Alternatively, where the connection 1321 is a high capacity (e.g., cable) connection, the modem 1316 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 1320.
[0005] The computer module 1301 typically includes at least one processor unit 1305, and a memory unit 1306. For example, the memory unit 1306 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1301 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1307 that couples to the video display 1314, loudspeakers 1317 and microphone 1380; an I/O interface 1313 that couples to the keyboard 1302, mouse 1303, scanner 1326, camera 1327 and optionally a joystick or other human interface device (not illustrated); and an interface 1308 for the external modem 1316 and printer 1315. In some implementations, the modem 1316 may be incorporated within the computer module 1301, for example within the interface 1308. The computer module 1301 also has a local network interface 1311, which permits coupling of the computer system 1300 via a connection 1323 to a local-area communications network 1322, known as a Local Area Network (LAN). As illustrated in Fig. 1A, the local communications network 1322 may also couple to the wide network 1320 via a connection 1324, which would typically include a so-called "firewall" device or device of similar functionality. The local network interface 1311 may comprise an Ethernet circuit card, a Bluetooth wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1311.
[0006] The I/O interfaces 1308 and 1313 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1309 are provided and typically include a hard disk drive (HDD) 1310. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1312 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1300.
[0007] The components 1305 to 1313 of the computer module 1301 typically communicate via an interconnected bus 1304 and in a manner that results in a conventional mode of operation of the computer system 1300 known to those in the relevant art. For example, the processor 1305 is coupled to the system bus 1304 using a connection 1318. Likewise, the memory 1306 and optical disk drive 1312 are coupled to the system bus 1304 by connections 1319. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
[0008] The method of automatically adjusting images may be implemented using the computer system 1300 wherein the processes of Figs. 6A to 9B, to be described, may be implemented as one or more software application programs 1333 executable within the computer system 1300. In particular, the steps of the method of automatically adjusting images are effected by instructions 1331 (see Fig. 1B) in the software 1333 that are carried out within the computer system 1300. The software instructions 1331 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the method of automatically adjusting images and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0009] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 1300 from the computer readable medium, and then executed by the computer system 1300. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 1300 preferably effects an advantageous apparatus for automatically adjusting images.
[0010] The software 1333 is typically stored in the HDD 1310 or the memory 1306. The software is loaded into the computer system 1300 from a computer readable medium, and executed by the computer system 1300 to cause the computer system to operate as an image processor according to the herein described methods. Thus, for example, the software 1333 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1325 that is read by the optical disk drive 1312. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 1300 preferably effects an apparatus for automatically adjusting images.
[0011] In some instances, the application programs 1333 may be supplied to the user encoded on one or more CD-ROMs 1325 and read via the corresponding drive 1312, or alternatively may be read by the user from the networks 1320 or 1322. Still further, the software can also be loaded into the computer system 1300 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1300 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1301. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1301 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[0012] The second part of the application programs 1333 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1314. Through manipulation of typically the keyboard 1302 and the mouse 1303, a user of the computer system 1300 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1317 and user voice commands input via the microphone 1380.
[0013] Fig. 1B is a detailed schematic block diagram of the processor 1305 and a "memory" 1334. The memory 1334 represents a logical aggregation of all the memory modules (including the HDD 1309 and semiconductor memory 1306) that can be accessed by the computer module 1301 in Fig. 1A.
[0014] When the computer module 1301 is initially powered up, a power-on self-test (POST) program 1350 executes. The POST program 1350 is typically stored in a ROM 1349 of the semiconductor memory 1306 of Fig. 1A. A hardware device such as the ROM 1349 storing software is sometimes referred to as firmware. The POST program 1350 examines hardware within the computer module 1301 to ensure proper functioning and typically checks the processor 1305, the memory 1334 (1309, 1306), and a basic input-output systems software (BIOS) module 1351, also typically stored in the ROM 1349, for correct operation. Once the POST program 1350 has run successfully, the BIOS 1351 activates the hard disk drive 1310 of Fig. 1A. Activation of the hard disk drive 1310 causes a bootstrap loader program 1352 that is resident on the hard disk drive 1310 to execute via the processor 1305. This loads an operating system 1353 into the RAM memory 1306, upon which the operating system 1353 commences operation. The operating system 1353 is a system level application, executable by the processor 1305, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
[0015] The operating system 1353 manages the memory 1334 (1309, 1306) to ensure that each process or application running on the computer module 1301 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1300 of Fig. 1A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1334 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1300 and how such is used.
[0016] As shown in Fig. 1B, the processor 1305 includes a number of functional modules including a control unit 1339, an arithmetic logic unit (ALU) 1340, and a local or internal memory 1348, sometimes called a cache memory. The cache memory 1348 typically includes a number of storage registers 1344 - 1346 in a register section. One or more internal busses 1341 functionally interconnect these functional modules. The processor 1305 typically also has one or more interfaces 1342 for communicating with external devices via the system bus 1304, using a connection 1318. The memory 1334 is coupled to the bus 1304 using a connection 1319.
[0017] The application program 1333 includes a sequence of instructions 1331 that may include conditional branch and loop instructions. The program 1333 may also include data 1332 which is used in execution of the program 1333. The instructions 1331 and the data 1332 are stored in memory locations 1328, 1329, 1330 and 1335, 1336, 1337, respectively. Depending upon the relative size of the instructions 1331 and the memory locations 1328-1330, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1330. Alternatively, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1328 and 1329.
[0018] In general, the processor 1305 is given a set of instructions which are executed therein. The processor 1305 waits for a subsequent input, to which the processor 1305 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1302, 1303, data received from an external source across one of the networks 1320, 1322, data retrieved from one of the storage devices 1306, 1309 or data retrieved from a storage medium 1325 inserted into the corresponding reader 1312, all depicted in Fig. 1A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1334.
[0019] The disclosed arrangements for automatically adjusting images use input variables 1354, which are stored in the memory 1334 in corresponding memory locations 1355, 1356, 1357. The arrangements for automatically adjusting images produce output variables 1361, which are stored in the memory 1334 in corresponding memory locations 1362, 1363, 1364. Intermediate variables 1358 may be stored in memory locations 1359, 1360, 1366 and 1367.
[0020] Referring to the processor 1305 of Fig. 1B, the registers 1344, 1345, 1346, the arithmetic logic unit (ALU) 1340, and the control unit 1339 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 1333. Each fetch, decode, and execute cycle comprises:
[0021] a fetch operation, which fetches or reads an instruction 1331 from a memory location 1328, 1329, 1330;
[0022] a decode operation in which the control unit 1339 determines which instruction has been fetched; and
[0023] an execute operation in which the control unit 1339 and/or the ALU 1340 execute the instruction.
[0024] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1339 stores or writes a value to a memory location 1332.
[0025] Each step or sub-process in the processes of Figs. 6A to 9B is associated with one or more segments of the program 1333 and is performed by the register section 1344, 1345, 1347, the ALU 1340, and the control unit 1339 in the processor 1305 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1333.
[0026] The method of automatically adjusting images may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions for automatically adjusting images. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
[0027] Figs. 2A and 2B collectively form a schematic block diagram of a general purpose electronic device 1401 including embedded components, upon which the methods for automatically adjusting images to be described are desirably practiced. The electronic device 1401 may be, for example, an image processing device such as a mobile phone, a portable media player or a digital camera, in which processing resources are limited. Nevertheless, the methods to be described may also be performed on higher-level devices such as desktop computers, server computers, and other such devices with significantly larger processing resources.
[0028] As seen in Fig. 2A, the electronic device 1401 comprises an embedded controller 1402. Accordingly, the electronic device 1401 may be referred to as an "embedded device." In the present example, the controller 1402 has a processing unit (or processor) 1405 which is bi-directionally coupled to an internal storage module 1409. The storage module 1409 may be formed from non-volatile semiconductor read only memory (ROM) 1460 and semiconductor random access memory (RAM) 1470, as seen in Fig. 2B. The RAM 1470 may be volatile, non-volatile or a combination of volatile and non-volatile memory.
[0029] The electronic device 1401 includes a display controller 1407, which is connected to a video display 1414, such as a liquid crystal display (LCD) panel or the like. The display controller 1407 is configured for displaying graphical images on the video display 1414 in accordance with instructions received from the embedded controller 1402, to which the display controller 1407 is connected.
[0030] The electronic device 1401 also includes user input devices 1413 which are typically formed by keys, a keypad or like controls. In some implementations, the user input devices 1413 may include a touch sensitive panel physically associated with the display 1414 to collectively form a touch-screen. Such a touch-screen may thus operate as one form of graphical user interface (GUI) as opposed to a prompt or menu driven GUI typically used with keypad-display combinations. Other forms of user input devices may also be used, such as a microphone (not illustrated) for voice commands or a joystick/thumb wheel (not illustrated) for ease of navigation about menus.
[0031] As seen in Fig. 2A, the electronic device 1401 also comprises a portable memory interface 1406, which is coupled to the processor 1405 via a connection 1419. The portable memory interface 1406 allows a complementary portable memory device 1425 to be coupled to the electronic device 1401 to act as a source or destination of data or to supplement the internal storage module 1409. Examples of such interfaces permit coupling with portable memory devices such as Universal Serial Bus (USB) memory devices, Secure Digital (SD) cards, Personal Computer Memory Card International Association (PCMCIA) cards, optical disks and magnetic disks.
[0032] The electronic device 1401 also has a communications interface 1408 to permit coupling of the device 1401 to a computer or communications network 1320 via a connection 1421. The connection 1421 may be wired or wireless. For example, the connection 1421 may be radio frequency or optical. An example of a wired connection includes Ethernet. Further, an example of a wireless connection includes Bluetooth™ type local interconnection, Wi-Fi (including protocols based on the standards of the IEEE 802.11 family), Infrared Data Association (IrDa) and the like.
[0033] Typically, the electronic device 1401 is configured to perform some special function. The embedded controller 1402, possibly in conjunction with further special function components 1410, is provided to perform that special function. For example, where the device 1401 is a digital camera, the components 1410 may represent a lens, focus control and image sensor of the camera. The special function components 1410 are connected to the embedded controller 1402. As another example, the device 1401 may be a mobile telephone handset. In this instance, the components 1410 may represent those components required for communications in a cellular telephone environment. Where the device 1401 is a portable device, the special function components 1410 may represent a number of encoders and decoders of a type including Joint Photographic Experts Group (JPEG), Moving Picture Experts Group (MPEG), MPEG-1 Audio Layer 3 (MP3), and the like.
[0034] The methods described hereinafter may be implemented using the embedded controller 1402, where the processes of Figs. 6A to 9B may be implemented as one or more software application programs 1433 executable within the embedded controller 1402. The electronic device 1401 of Fig. 2A implements the described methods. In particular, with reference to Fig. 2B, the steps of the described methods are effected by instructions in the software 1433 that are carried out within the controller 1402. The software instructions may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0035] The software 1433 of the embedded controller 1402 is typically stored in the non-volatile ROM 1460 of the internal storage module 1409. The software 1433 stored in the ROM 1460 can be updated when required from a computer readable medium. The software 1433 can be loaded into and executed by the processor 1405. In some instances, the processor 1405 may execute software instructions that are located in RAM 1470. Software instructions may be loaded into the RAM 1470 by the processor 1405 initiating a copy of one or more code modules from ROM 1460 into RAM 1470. Alternatively, the software instructions of one or more code modules may be pre-installed in a non-volatile region of RAM 1470 by a manufacturer. After one or more code modules have been located in RAM 1470, the processor 1405 may execute software instructions of the one or more code modules.
[0036] The application program 1433 is typically pre-installed and stored in the ROM 1460 by a manufacturer, prior to distribution of the electronic device 1401. However, in some instances, the application programs 1433 may be supplied to the user encoded on one or more CD-ROM (not shown) and read via the portable memory interface 1406 of Fig. 2A prior to storage in the internal storage module 1409 or in the portable memory 1425. In another alternative, the software application program 1433 may be read by the processor 1405 from the network 1320, or loaded into the controller 1402 or the portable storage medium 1425 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that participates in providing instructions and/or data to the controller 1402 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, flash memory, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the device 1401. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the device 1401 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like. A computer readable medium having such software or computer program recorded on it is a computer program product.
[0037] The second part of the application programs 1433 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1414 of Fig. 2A. Through manipulation of the user input device 1413 (e.g., the keypad), a user of the device 1401 and the application programs 1433 may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via loudspeakers (not illustrated) and user voice commands input via the microphone (not illustrated).
[0038] Fig. 2B illustrates in detail the embedded controller 1402 having the processor 1405 for executing the application programs 1433 and the internal storage 1409. The internal storage 1409 comprises read only memory (ROM) 1460 and random access memory (RAM) 1470. The processor 1405 is able to execute the application programs 1433 stored in one or both of the connected memories 1460 and 1470. When the electronic device 1401 is initially powered up, a system program resident in the ROM 1460 is executed. The application program 1433 permanently stored in the ROM 1460 is sometimes referred to as "firmware". Execution of the firmware by the processor 1405 may fulfil various functions, including processor management, memory management, device management, storage management and user interface.
[0039] The processor 1405 typically includes a number of functional modules including a control unit (CU) 1451, an arithmetic logic unit (ALU) 1452, a digital signal processor (DSP) 1453 and a local or internal memory comprising a set of registers 1454 which typically contain atomic data elements 1456, 1457, along with internal buffer or cache memory 1455. One or more internal buses 1459 interconnect these functional modules. The processor 1405 typically also has one or more interfaces 1458 for communicating with external devices via system bus 1481, using a connection 1461.
[0040] The application program 1433 includes a sequence of instructions 1462 through 1463 that may include conditional branch and loop instructions. The program 1433 may also include data, which is used in execution of the program 1433. This data may be stored as part of the instruction or in a separate location 1464 within the ROM 1460 or RAM 1470.
[0041] In general, the processor 1405 is given a set of instructions, which are executed therein. This set of instructions may be organised into blocks, which perform specific tasks or handle specific events that occur in the electronic device 1401. Typically, the application program 1433 waits for events and subsequently executes the block of code associated with that event. Events may be triggered in response to input from a user, via the user input devices 1413 of Fig. 2A, as detected by the processor 1405. Events may also be triggered in response to other sensors and interfaces in the electronic device 1401.
[0042] The execution of a set of the instructions may require numeric variables to be read and modified. Such numeric variables are stored in the RAM 1470. The disclosed method uses input variables 1471 that are stored in known locations 1472, 1473 in the memory 1470. The input variables 1471 are processed to produce output variables 1477 that are stored in known locations 1478, 1479 in the memory 1470. Intermediate variables 1474 may be stored in additional memory locations 1475, 1476 of the memory 1470. Alternatively, some intermediate variables may only exist in the registers 1454 of the processor 1405.
[0043] The execution of a sequence of instructions is achieved in the processor 1405 by repeated application of a fetch-execute cycle. The control unit 1451 of the processor 1405 maintains a register called the program counter, which contains the address in ROM 1460 or RAM 1470 of the next instruction to be executed. At the start of the fetch execute cycle, the contents of the memory address indexed by the program counter is loaded into the control unit 1451. The instruction thus loaded controls the subsequent operation of the processor 1405, causing for example, data to be loaded from ROM memory 1460 into processor registers 1454, the contents of a register to be arithmetically combined with the contents of another register, the contents of a register to be written to the location stored in another register and so on. At the end of the fetch execute cycle the program counter is updated to point to the next instruction in the system program code. Depending on the instruction just executed this may involve incrementing the address contained in the program counter or loading the program counter with a new address in order to achieve a branch operation.
Each step or sub-process in the processes of the methods described below is associated with one or more segments of the application program 1433, and is performed by repeated execution of a fetch-execute cycle in the processor 1405 or similar programmatic operation of other independent processor blocks in the electronic device 1401.
The term 'preference histogram' is first explained to show how it relates to a 'preference distribution'. Diagram 300 in Fig. 3A represents the results of a hypothetical psychophysical experiment, presented in the form of a histogram. In this example, the histogram is represented by three bins, and is also referred to as a preference histogram. In this experiment, a group of viewers were presented with an initial image and a modified image.
It will be understood that an "initial image" is an image that has not undergone the further modifications that have been applied to the modified image. It will be understood that both the initial image and the modified image may have had the same initial pre-processing applied to them.
The modified image was the result of applying one or more image processing transforms (for example, contrast adjustment, chroma adjustment, brightness adjustment or the like) to the initial image. The viewers selected their preference between the two images by either selecting the image itself or touching or selecting a button associated with the image. The preference histogram bin 310 represents the frequency of viewers who selected a preference for the initial image. The preference histogram bin 330 represents the frequency of viewers who selected a preference for the modified image. The preference histogram bin 320 represents the frequency of viewers who indicated indifference towards the initial and modified images. In other words, the viewers who indicated indifference did not prefer one image over the other. This is also referred to as "no preference". The vertical axis 335 is the frequency with which viewers selected their preferences. Alternatively, the frequency can be normalised to represent percentages or the probability that a viewer in the experimental group would select that preference.
The preference histogram is not limited to three bins. Consider the results of another hypothetical psychophysical experiment as shown in diagram 340 of Fig. 3B. This diagram is a preference distribution plot. In this experiment, the viewers were asked to score between "-100" and "+100", where "-100" indicates a complete preference for the initial image while "+100" indicates a complete preference for the modified image. The result from this experiment is shown as a preference distribution plot 350. The horizontal axis 360 to 370 of the plot is the score value given by viewers, where scores on the horizontal axis on the left side 360 of the vertical axis indicate preference for the initial image (i.e. scores from "-1" to "-100") and scores on the horizontal axis on the right side 370 of the vertical axis indicate preference for the modified image (i.e. scores from "+1" to "+100"). Scores in the centre of the horizontal axis 380 (i.e. at the intersection with the vertical axis) indicate indifference towards the initial and modified images in terms of preference (i.e. a score of "0"). In other words, the viewers who indicated indifference did not prefer one image over the other. The vertical axis 385 is the frequency with which viewers selected scores at the various preference levels, normalised to represent the probability that a viewer in the experimental group would select that score. The shape of the preference distribution 350 is not the only form of preference distribution and is used here for illustration only.
The preference distribution 350 may be discretised into a preference histogram as shown in Fig. 3A. For example, preference scores between "-100" and "-33" can be aggregated into the preference for initial image bin 310, while preference scores between "+33" and "+100" can be aggregated into the preference for modified image bin 330. The remaining preference scores ("-32" to "+32") may be aggregated into the no preference bin 320. The number of histogram bins can also be increased by narrowing the preference score range for each bin.
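The following is a minimal sketch of that discretisation, using the example bin boundaries above; the function and variable names are illustrative only and are not part of the patent.

```python
from collections import Counter

def discretise_preference_scores(scores, lower=-33, upper=33):
    """Aggregate preference scores in [-100, +100] into a three-bin
    preference histogram: initial image, no preference, modified image."""
    histogram = Counter({"initial": 0, "no_preference": 0, "modified": 0})
    for score in scores:
        if score <= lower:
            histogram["initial"] += 1          # preference for the initial image
        elif score >= upper:
            histogram["modified"] += 1         # preference for the modified image
        else:
            histogram["no_preference"] += 1    # indifference between the two images
    total = sum(histogram.values())
    # Normalise frequencies to probabilities, as described for the vertical axis 385.
    return {bin_name: count / total for bin_name, count in histogram.items()}

# Example: scores collected from a group of viewers for one image-pair.
print(discretise_preference_scores([-80, -10, 0, 25, 40, 95, -50]))
```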
The following describes the term "preference bias". A psychophysical experiment was performed by showing multiple pairs of images to a group of viewers. Each pair of images consisted of an initial image and a modified image. The modified image was the result of applying one or more image processing transforms (for example, contrast adjustment, chroma adjustment, brightness adjustment or the like) to the initial image. The experimental results showed that each viewer had substantially different preference histogram characteristics. The difference amongst viewers in the preference histogram characteristics is referred to as "preference bias" or "inter-viewer preference difference". This bias in the viewer's preference can be caused by many factors, such as the viewer's cultural background and the viewer's background, interest or specialisation in different aspects of photography. Deficiencies in a viewer's physical visual system may also contribute to the viewer's preference bias. For example, a viewer may tend to prefer images that have stronger chroma to compensate for the reduction in cone sensitivities due to age factors.
To illustrate further, histogram 400 in Fig. 4A is a preference histogram for viewer A. According to this example, the preference histogram vertical axis indicates the probability of one of three preference selections. Of all the pairs of images that were shown to viewer A, 20% of the preference selections indicated a preference for the initial image 401, 30% of the preference selections indicated a preference for the modified image 402 and the remaining 50% of the preference selections indicated no preference 403.
Histogram 410 in Fig. 4B is a preference histogram for viewer B. The same pairs of images that were shown to viewer A were also shown to viewer B. For viewer B, 65% of the preference selections indicated a preference for the initial image 411, 20% of the preference selections indicated a preference for the modified image 412 and the remaining 15% of the preference selections indicated no preference 413. Both Fig. 4A and Fig. 4B illustrate an example of preference bias, in that the two viewers have substantially different preference histogram characteristics. In particular, viewer A was relatively insensitive to the differences between the modified images and the initial images, whereas viewer B mostly preferred the initial images.
As discussed in the background section, prior art methods aggregate preference selections from all viewers during the creation of preference prediction models, and as a consequence, these models lack the sensitivity to address different groups of viewers. This aspect is explained in more detail below. The histogram 420 in Fig. 4C illustrates a preference histogram obtained by aggregating the preference selections from viewer A and viewer B. That is, the preference histogram for viewer A 400 is combined with the preference histogram for viewer B 410 to create the aggregated preference histogram 420 in Fig. 4C. The preference histogram 420 indicates that, on average, 42.5% of the viewers (viewers A & B) indicated a preference for the initial images 421, followed by 25% for the modified images 422, and the remaining 32.5% were preference selections that indicated no preference 423. It should be evident that the characteristic of the preference histogram in diagram 420 is substantially different to the individual preference histogram characteristics of each of viewer A and viewer B.
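As a small worked example of the aggregation described above (the figures match those quoted for viewers A and B; the helper below is illustrative and not part of the patent):

```python
def aggregate_histograms(*histograms):
    """Average normalised preference histograms across viewers, as a prior-art
    style aggregation would do when building a single preference model."""
    bins = histograms[0].keys()
    return {b: sum(h[b] for h in histograms) / len(histograms) for b in bins}

viewer_a = {"initial": 0.20, "modified": 0.30, "no_preference": 0.50}
viewer_b = {"initial": 0.65, "modified": 0.20, "no_preference": 0.15}

# The aggregate (42.5% / 25% / 32.5%) resembles neither viewer A nor viewer B,
# which illustrates the preference bias problem the described method addresses.
print(aggregate_histograms(viewer_a, viewer_b))
```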
By the same token, prior art methods have failed to recognise the presence of the preference bias by developing preference models based on preference histograms aggregated from all viewers. It is easy to visualise from the examples provided in Fig. 4A, Fig. 4B and Fig. 4C that there exist viewers that have substantially different preference histogram characteristics. As a consequence, prior art methods generally only perform well for groups of viewers that have the most similar preference histogram characteristics to the aggregated viewer group characteristics. However, the performance of the prior art methods suffers for other groups of viewers that have preference histogram characteristics that are different to the aggregated viewer group characteristics.
Diagram 500 in Fig. 5 shows a representation of a pair of images. The image on the left is an initial image 510, and it contains a human object and a house object. The initial image 510 was captured under bright day light and part of the house is displayed dark due to the limited dynamic range of the camera. The image on the right is the modified image 520, which has the same image content as the initial image. The modified image has undergone a chroma adjustment process to increase the colour aspect of the face of the human and a contrast adjustment process to increase the house brightness. The initial image 510 and the modified image 520 were presented to two viewers, viewer C and viewer D. Viewer C is a wedding photographer. Viewer D is a photographer whose interest is in high dynamic range (HDR) image capture and processing. Both of the viewers were asked to indicate their preferences for the images. When viewer C viewed the images, the viewer fixated on and around the human face region 530 and indicated a preference for the modified image 520 over the initial image 510. When viewer D viewed the images, the viewer fixated on the region around the house in the modified image that was darker in the original image 540, and also indicated a preference for the modified image 520 over the initial image 510. While both viewers indicated the same preference for the modified image 520, the image features that influenced their preferences were substantially different. Viewer C preferred the modified image 520 because of the preferred chroma adjustments around the face region, as the face region is important for a wedding portrait. Viewer D preferred the modified image 520 because of the improvements to the image details in the dark region around the house, as improved image details are important for HDR images.
Therefore, the preference selection for a natural image involves significantly more complex interactions between the contents of the natural image, the image processing that is applied to the image, and the highly subjective nature of a viewer's preference, which could be governed by the viewer's background, interest and specialisation. None of the prior art described above addresses the preference biases with natural images in the manner described in the above examples.
A method of associating an image processing profile with visual attention points and a preference selection is now described.
Fig. 6A shows a method 600 of associating an image processing profile with a viewer which addresses the preference bias of the viewer. Method 600 starts at step 610 where a pair of images (an initial image and a modified image) is simultaneously or concurrently presented to the viewer for display on a connected display unit. Alternatively, the images may be presented consecutively (one after the other) to the user. The images may be presented to a user on a display on the device that captured the images. Alternatively, the images may be presented to the user on a separate display that is in communication with the device that captured the images. As a further alternative, the display may be separate from the device that captured the images.
The pair of images, also referred to as an image-pair, consists of an initial image and a modified image. The modified image is the result of applying one or more image processing transforms to the initial image. Example image processing transforms include modifications to the luminance, saturation, hue, chroma, contrast, sharpness and noise level of all or part of the initial image. One or more of these may affect the viewer's preference for the modified image in relation to the initial image. The initial image therefore acts as a reference against the modified image. It will be understood that the initial image may or may not have had image processing applied to it beforehand. According to one example, the initial image may be a raw un-processed image obtained directly from an image capture device. According to another example, the initial image may have had some image processing applied to it prior to its use in this process. It will also be understood that the initial image may be obtained from any suitable memory device or medium, either local to the image capture device, within a local computing device or in a server accessible via any suitable network.
Upon presenting the images to the viewer via the display, method 600 proceeds to steps 620 and 630, which may be executed in parallel. Alternatively, it will be understood that step 620 may be executed first and step 630 executed subsequent to step 620. In step 620, the visual attention points associated with the initial image and the modified image are determined or recorded from the moment the pair of images is presented to the viewer. Visual attention points are associated with regions of interest in an image that attract the visual attention of the viewer. That is, certain content, objects, items or components within a defined area/region of the initial or modified image may attract the visual attention of the viewer, and the spatial locations (image coordinates) of those particular points are captured. In one example, each visual attention point includes only the spatial location information of the point in the corresponding image presented to the viewer. In another example, each visual attention point includes the spatial location information and the amount of time that attention was spent at each point in the corresponding image presented to the viewer, as measured by any suitable eye tracking system. Therefore, a set of visual attention points relating to the image content that is common between the initial image and the modified image is determined. One or more spatial locations in the initial image or modified image may be identified based on the determined set of visual attention points, where those spatial locations are associated with regions of interest that contain image content that attracts the visual attention of the viewer. Therefore, the process may select an image processing profile based on the preference selection and the identified spatial locations and/or the identified regions of interest.
It should be understood that the regions of interest as determined from the visual attention points are different to salient regions in the image. Although salient regions of the image generally contain important objects, these regions do not necessarily attract the most visual attention from the viewer. Besides, the viewer's attention may be dependent on the viewing task that has been assigned to the viewer.
The viewing task in these examples relates to the viewer indicating or selecting a preferred image from a pair of images.
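As a minimal sketch of how a visual attention point of the second kind (spatial location plus dwell time) might be represented in software, assuming hypothetical names that do not appear in the patent:

```python
from dataclasses import dataclass

@dataclass
class VisualAttentionPoint:
    """One fixation recorded while an image-pair is displayed (step 620)."""
    image_id: str      # which image of the pair was fixated, e.g. "initial" or "modified"
    x: int             # horizontal image coordinate of the fixation
    y: int             # vertical image coordinate of the fixation
    dwell_ms: float    # time spent at this location, e.g. from an eye tracker

# Example: two fixations on the face region of the modified image.
points = [
    VisualAttentionPoint("modified", x=412, y=198, dwell_ms=640.0),
    VisualAttentionPoint("modified", x=420, y=205, dwell_ms=380.0),
]
```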
The visual attention points may be captured using any suitable technique or technology. For example, one suitable technique includes the use of an eye-tracker system (head-mounted, table-mounted or embedded in display devices) that provides viewer fixation information, which includes spatial locations on the image and the amount of time spent on each spatial location as an indication of the location and intensity of the viewer's attention. According to another example, manual inputs may be recorded through the use of a computer mouse. The locations on the image that the viewer clicks using the mouse may indicate the visual attention points. The number of clicks in a region of the image may indicate the intensity of the viewer's attention spent in each region.
At step 630, a preference selection is received by the system (i.e. captured or obtained from the viewer). For step 630, it will be understood that many different methods may be used to indicate a preference for the images. According to one example, the viewer may be asked to provide a score between "-100" and "+100", where "-100" indicates a complete preference for the initial image while "+100" indicates a complete preference for the modified image. For example, the user may enter the values into a computing device using a keyboard.
Alternatively, the user may indicate by voice the relevant score and the detected voice may be converted into an associated value. One or more threshold values may be used to convert the scores into a preference histogram as described earlier in relation to Fig. 3B.
According to another example, the viewer may select a displayed option between a preference for the initial image and a preference for the modified image. For example, either image may be clicked on or selected by the user to indicate which of the images is the preferred image. The computing device will then automatically determine that the image not selected is the non-preferred image.
According to another example, the viewer may select from a preference for the initial image, a preference for the modified image or an indifference towards the initial image and modified image (i.e. no preference). For example, the user may click on the initial image, click on the modified image or click on an icon indicating they have no preference. Upon detecting the interaction of the user with the images being displayed, the computing device may record the preference of the user associated with the pair of images. As soon as the viewer makes a decision on the preference for the images, step 630 captures the preference selection of the viewer. That is, the computing device records the user's preference (or non-preference) for that pair of images. At the same time, the recording of visual attention points in step 620 ends when step 630 completes. It will be understood that it is not necessary for steps 620 and 630 to be carried out in parallel as long as the recorded visual attention points are associated correctly with the viewer's preference selection for each pair of images.
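One way to keep that association is simply to store the attention points and the preference selection together per image-pair; the record below is a hypothetical illustration that reuses the VisualAttentionPoint sketch introduced earlier.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImagePairObservation:
    """Everything recorded for one presented image-pair (steps 610 to 630)."""
    pair_id: str
    attention_points: List[VisualAttentionPoint] = field(default_factory=list)
    preference: str = "no_preference"   # "initial", "modified" or "no_preference"

# Example: the viewer preferred the modified image of pair "pair-001".
observations: List[ImagePairObservation] = [
    ImagePairObservation(pair_id="pair-001", attention_points=points, preference="modified"),
]
```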
It will be understood that the number of images presented each time to the viewer for selection of a preference is not limited to a pair of images. According to one example, three or more images may be presented to the viewer in step 610. The images may consist of an initial image, a first modified image and a second modified image. The first and the second modified images may be the result of applying a first image processing transform and a second image processing transform respectively to the initial image. Example image processing transforms may include modifications to the luminance, saturation, hue, chroma, contrast, sharpness and noise level of all or part of the initial image. All of these may affect the viewer’s preference for one of the first and second modified images in relation to the initial image. Further to this example, the visual attention points corresponding to each of the images presented to the viewer are recorded in step 620. In step 630, the viewer selects a preference for one of the initial image, the first modified image and the second modified image. According to another example, at step 630, the viewer may select a preference for one of the initial image, the first modified image, the second modified image or an indifference towards any of the images (i.e. no preference).
Method 600 then proceeds to step 650, where an image processing profile is selected from amongst a set of pre-determined image processing profiles to be associated with the viewer. The image processing profile is selected based on the set of visual attention points and the preference selection determined from steps 620 and 630 described above. The image processing profile contains image attributes information to facilitate the association of the image processing profile with the viewer by way of the viewer preference profile. The image processing profile also contains image transformation information that can be used to modify an initial image in a way that will improve the viewer’s preference for the modified image over the initial image. Further details of the image processing profile will be described below in conjunction with Fig. 11A, Fig. 11B, Fig. 11C and Fig. 11D.
Method 600 then proceeds to step 660 where one or more images are adjusted based on the image processing profile that was selected in step 650. Method 600 then ends.

As described above, the method 600 provides an example process where a pair of images is presented to the viewer. In another example, steps 610, 620 and 630 in method 600 may be iterated over two or more pairs of images by presenting multiple pairs of images. The multiple pairs of images may be presented including an initial image and at least two modified images, where each modified image is shown with the initial image. At every iteration, the visual attention points and the preference selection may be stored with reference to the corresponding pair of images. Steps 610, 620 and 630 are completed for all iterations prior to proceeding to step 650. At step 650, an image processing profile is selected from amongst a set of pre-determined image processing profiles to be associated with the viewer. The image processing profile is selected based on the set of visual attention points and the preference selections determined from multiple iterations of steps 610, 620 and 630 described above. Method 600 then proceeds to step 660. At step 660, the process automatically adjusts one or more images based on the image processing profile that was selected in step 650.
Fig. 6B shows a further method 605 of associating an image processing profile with a viewer, which addresses the preference bias of the viewer. Steps 610, 620 and 630 are performed in the same way as described above with reference to Fig. 6A.
Method 605 then proceeds to process 640, where a viewer preference profile is created for the viewer based on the visual attention points recorded at step 620 and the viewer’s preference selection captured at step 630. Information about the regions in the images that matter to the viewer in influencing the viewer’s preference selection is stored within the viewer preference profile. Further details of how the viewer preference profile is created are described below.
After process 640, the method 605 proceeds to step 655, where an image processing profile is selected from amongst a set of pre-determined image processing profiles to be associated with the viewer. The image processing profile is selected based on the viewer preference profile created at step 640. Again, the image processing profile contains image attributes information to facilitate the association of the image processing profile with the viewer by way of the viewer preference profile. The image processing profile also contains image transformation information that can be used to modify an initial image in a way that will improve the viewer’s preference for the modified image over the initial image.
Method 605 then proceeds to step 660 as described above with reference to Fig. 6A.
Details of a method for creating a viewer preference profile are now provided.
Fig. 7A shows a method 700 that describes an example of the viewer preference profile creation process 640 described above with reference to Fig. 6B. In this example, the inputs to process 640 consist of the viewer’s visual attention points recorded at step 620 and the viewer’s preference selection for a pair of images (initial image and modified image) captured at step 630.
Method 700 begins with step 710 where candidate visual attention points are determined from the set of visual attention points recorded in step 620. Subsequently, an image processing profile may be selected based on the preference selection and the determined set of candidate visual attention points. There are many approaches to selecting the candidate visual attention points.
In a preferred example, a clustering approach may be employed. In a clustering approach, a clustering algorithm such as the K-means algorithm can be utilised to cluster visual attention points that are close to one another, based only on the spatial location information of each point. This clustering process is repeated until no further clusters can be created. Within each cluster, the average of the spatial locations of the visual attention points is computed to represent the cluster’s centre attention point (also known as the centroid). The centroids are selected as candidate visual attention points.

In another example, the visual attention points that have an amount of attention spent above a pre-determined threshold (measured as a time value) may be selected as candidate visual attention points.
In yet another example, each visual attention point is weighted by the amount of attention (measured in time) spent at the corresponding point. Therefore, a weighting function is applied to the spatial locations based on a time value associated with the visual attention points. The K-means algorithm may be utilised to cluster the weighted visual attention points that are close to one another. This clustering process is repeated until no further clusters can be created. Within each cluster, a weighted average of the spatial locations of the visual attention points is computed to represent the cluster’s centre attention point (also known as the centroid). The centroids are selected as candidate visual attention points.

In yet another example, candidate clusters are selected from all the clusters after the completion of the clustering process, prior to selecting the candidate visual attention points. In one cluster selection approach, the clusters that have a number of visual attention points above a pre-determined threshold are selected as candidate clusters. The threshold is not fixed and can be tuned based on the requirements of the target application. In an alternative cluster selection approach, the clusters are initially sorted from the largest to the smallest cluster based on the number of visual attention points in each cluster. The top 20% largest clusters are selected as candidate clusters. Similarly, the threshold of 20% is not fixed, and can be tuned based on the requirements of the target application. For example, the top 30%, 25%, 15%, 10% or 5% of the largest clusters may be selected as candidate clusters. Finally, the centre attention point of each of the candidate clusters (also known as the centroid) is selected as a candidate visual attention point.
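By way of a hedged illustration, the sketch below shows one possible implementation of the weighted clustering and cluster-size selection described above, using the K-means algorithm from scikit-learn. The fixed cluster count, the minimum cluster size and the (x, y, dwell-time) point format are assumptions made for the example rather than requirements of the method.

    import numpy as np
    from sklearn.cluster import KMeans

    def candidate_attention_points(points, n_clusters=5, min_cluster_size=10):
        # points: array of shape (N, 3) holding x, y and dwell time for each
        # recorded visual attention point (pooled over the image-pair).
        xy = points[:, :2]          # spatial locations only
        dwell = points[:, 2]        # time spent at each point, used as a weight

        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        labels = km.fit_predict(xy, sample_weight=dwell)

        candidates = []
        for c in range(n_clusters):
            members = xy[labels == c]
            weights = dwell[labels == c]
            if len(members) < min_cluster_size:
                continue            # keep only sufficiently large clusters
            # Weighted average of spatial locations gives the cluster centroid,
            # taken here as a candidate visual attention point.
            candidates.append(np.average(members, axis=0, weights=weights))
        return np.array(candidates)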
As mentioned before, regions of interest in an image contain certain contents, objects, items or components that attract the visual attention of the viewer. The clustering algorithm used may identify regions of interest based on the relationship between a number of different visual attention points in the set of visual attention points. That is, by detecting a group or cluster of visual attention points, the system may identify that area as a region of interest. The regions of interest are associated with the candidate visual attention points, which may be selected based on the number of visual attention points within one or more clusters, the size of one or more clusters, the location of the centroid of one or more clusters and/or the use of a time value weighting function associated with the visual attention points.
In all of the above approaches, the visual attention points from the initial image may also be pooled with the visual attention points from the modified image prior to determining the candidate visual attention points. This is because, as described above, the images in a pair of images have substantially similar image content, and differ only as a result of the different image processing transforms being applied.
Method 700 proceeds to step 713 where the candidate visual attention points are combined with the preference selection captured in step 630 to form a viewer preference profile. The diagram 720 in Fig. 7B illustrates an example of how the viewer preference profile 721 created in step 713 of method 700 can be represented. The viewer preference profile 721 contains an identification of the viewer’s preference selection captured at step 630 and a total of M candidate visual attention points. The profile 721 is represented in vector form {Candidate visual attention point 1, ... , Candidate visual attention point M, Preference selection}.
Fig. 8A shows a method 800 that describes another example of a viewer preference profile creation process 640. In this example, the inputs to process 640 consist of the viewer’s visual attention points recorded at step 620 and the viewer’s preference selection captured at step 630 for a pair of images (initial image and modified image).
Method 800 begins with step 710 of Fig. 7A where candidate visual attention points are determined from the set of visual attention points recorded in step 620. Various examples of how step 710 may be executed have already been described above.
Method 800 proceeds to step 811 where one or more image features are extracted from each of the candidate visual attention points obtained in step 710, in order to select an image processing profile based on the preference selection and the identified image features. Candidate visual attention points from different spatial image locations may have similar image features, such as, for example, points from a large uniform sky region and points from a similarly textured grass region. By using image features, candidate visual attention points that have similar image features can be combined into a single representation, hence increasing the robustness and efficiency of the viewer preference profile. One or more image features are extracted from a region surrounding the candidate visual attention point. This region is also known as the region of interest, as described above. The size of the region of interest may be dependent on the application and the setup (influenced by the viewing distance of the viewer to the images). In an example, the region of interest may extend to approximately a two-degree field of view of the viewer. In another example, the region of interest is obtained through segmenting the region around the candidate visual attention point and combining image segments that share similar image features. In yet another example, the region of interest is the result of combining multiple adjacent regions that share similar image features, forming a larger contiguous region of interest.
A multitude of image features may be extracted from each of the candidate visual attention points, ranging from low level to high level image features. Example low level image features include pixel based features such as luminance, colour, gradient, edge, contrast, sharpness, smoothness and Gabor features. Other transform based low level features include singular value decomposition and frequency decomposition of the region surrounding the candidate visual attention points. Example high level image features include object-based features which are obtained from techniques that discern between mountain-like regions, human face and body regions, sky, grass and architectural regions. These high level features are commonly derived by computing statistical information from low level image features. In addition, image features can also include statistical differences between the initial image and the modified image, for each region surrounding the candidate visual attention point.

Upon completion of the image feature extraction process 811, method 800 proceeds to step 813 where the extracted image features are combined with the preference selection captured in step 630 to form a viewer preference profile. The diagram 820 in Fig. 8B illustrates an example of how the viewer preference profile 821 created in step 813 of method 800 may be represented. The viewer preference profile 821 may contain the viewer’s preference selection captured in step 630 and a total of N image features extracted from N candidate visual attention points at step 811. The profile 821 is represented in vector form {Image feature 1, ... , Image feature N, Preference selection}.
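A brief sketch of extracting a few such low level features from the region of interest around a candidate visual attention point is given below; the window size (standing in for the roughly two-degree field of view mentioned above), the function name and the particular choice of features are illustrative assumptions.

    import numpy as np

    def extract_region_features(luma, point, half_size=32):
        # luma: 2-D array of luminance values; point: (row, col) of a candidate
        # visual attention point; half_size: half-width of the region of interest.
        r, c = point
        r0, r1 = max(r - half_size, 0), min(r + half_size, luma.shape[0])
        c0, c1 = max(c - half_size, 0), min(c + half_size, luma.shape[1])
        region = luma[r0:r1, c0:c1].astype(float)

        gy, gx = np.gradient(region)        # simple gradient-based edge response
        return {
            "mean_luminance": region.mean(),
            "contrast": region.std(),       # RMS contrast as a stand-in
            "edge_strength": np.hypot(gx, gy).mean(),
        }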
In another example, a richer viewer preference profile may contain two or more image features for each candidate visual attention point. It should be noted that the viewer preference profile may be extended in any suitable manner to meet the requirements of the target application.
Fig. 9A shows a method 900 that describes a preferred example of the viewer preference profile creation process 640. In this example, the inputs to the process 640 consist of the viewer’s visual attention points recorded at step 620 and the viewer’s preference selections for multiple pairs of images (two or more pairs) captured at step 630.

Method 900 begins with step 710 where candidate visual attention points are determined or selected from the visual attention points captured in step 620 for a pair of images. Method 900 proceeds to step 811 where one or more image features are extracted from the pair of images at each of the candidate visual attention points obtained from step 710. Various embodiments of step 710 and step 811 have been described in method 700 and in method 800 respectively, and will not be repeated here.
Method 900 proceeds to the checkpoint 911. Checkpoint 911 confirms whether the determination of candidate visual attention points and the extraction of image features have been completed for all pairs of images. If there are remaining pairs of images that have not been processed, method 900 repeats step 710 and step 811 for the next and remaining pairs of images.

Otherwise, method 900 proceeds to step 912 where a preference histogram is computed or calculated based on the viewer’s preference selections from step 630. As described in method 600, the viewer may select between a preference for the initial image and a preference for the modified image in step 630. In this example of method 900, the viewer selects from one of three available preference options for each pair of images: a preference for the initial image, a preference for the modified image or an indifference towards the initial image and modified image (no preference). Hence, the preference histogram computed or calculated in step 912 consists of three bins where each bin represents one of the three preference options. The frequency associated with each bin is the corresponding preference selection count for each of the preference options. Alternatively, the preference histogram may be normalised to represent proportions or probabilities by dividing the preference count in each bin by the total preference selection count. Examples of preference histograms are illustrated in Fig. 4A, Fig. 4B and Fig. 4C, as described above.
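The following small sketch illustrates, under assumed option labels, how the three-bin preference histogram of step 912 could be computed and normalised from the recorded selections.

    from collections import Counter

    OPTIONS = ("initial", "modified", "no preference")

    def preference_histogram(selections):
        # selections: one label per image-pair, each drawn from OPTIONS
        counts = Counter(selections)
        total = sum(counts[o] for o in OPTIONS) or 1   # guard against empty input
        return {o: counts[o] / total for o in OPTIONS}

    # For example, 4 selections of "initial", 14 of "modified" and 2 of
    # "no preference" give {'initial': 0.2, 'modified': 0.7, 'no preference': 0.1}.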
Method 900 proceeds to step 913 where the image features extracted at step 811 are combined with the preference histogram created at step 912 to form or create a viewer preference profile. The diagram 920 in Fig. 9B illustrates an example of how the viewer preference profile 921 that is created in step 913 of method 900 may be formed or represented. The viewer preference profile 921 contains the preference histogram created at step 912 and a total of O image features extracted from O candidate visual attention points at step 811. The profile 921 is represented in vector form {Image feature 1, ... , Image feature O, Preference histogram}.
Subsequently, the process proceeds to step 655 where the image processing profile is selected, this time based on the identified image features and the determined preference histogram.
The diagram 1000 in Fig. 10 illustrates yet another example of a viewer preference profile 1001 that may be created in step 913. In step 913 of method 900, all the image features extracted from all pairs of images may be further analysed such that only a dominant set of image features is retained. Example dominant image features are i) those features that occur regularly within the image and ii) low dimensional image features that form a subset of all or a set of image features. Example methods of computing the dominant set of image features include a clustering technique and a principal component analysis technique. As a consequence, the viewer preference profile 1001 has a simpler representation and is represented in vector form {Dominant image features, Preference histogram}. Therefore, one or more dominant image features may be determined from the one or more image features that were previously determined, and the image processing profile may be selected based on the preference selection and the determined dominant image features.
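As a sketch only, one way of computing such a dominant set via principal component analysis is shown below; the feature matrix layout and the number of retained components are assumptions made for the example.

    from sklearn.decomposition import PCA

    def dominant_features(feature_matrix, n_components=3):
        # feature_matrix: (num_candidate_points, num_features) array of the
        # image features extracted over all pairs of images.
        pca = PCA(n_components=n_components)
        projected = pca.fit_transform(feature_matrix)
        # The projections onto the leading components serve as the low
        # dimensional dominant image features kept in the simpler profile.
        return projected, pca.explained_variance_ratio_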
In addition, the viewer preference profiles described in 721, 821, 921 and 1001 can be augmented with information about the locations of the visual attention points, such as sparseness of the points and image coverage. This provides additional information about the visual attention of the viewer. In one example, this may be used to determine whether a viewer focuses on one or more specific image regions, or scans through all image regions in a casual manner. In another example, this information may be used to indicate the confidence of the candidate visual attention points.
As previously described in an embodiment of the invention with reference to method 600, the number of images presented to the viewer is not limited to a pair of images. It should be understood that the methods 700, 800 and 900 are easily extended to support examples where three or more images are presented to the viewer in step 610. Consider an example where three images are presented to the viewer in step 610. In this example, step 710 is extended to select candidate visual attention points from the visual attention points recorded when the viewer views all three images in step 620. Subsequently, the preference histogram of step 912 consists of four bins which represent the preference for the initial image, the preference for the first modified image, the preference for the second modified image and indifference towards any of the three presented images.
It should be understood from the examples described in Fig. 7A, Fig. 7B, Fig. 8A, Fig. 8B, Fig. 9A, Fig. 9B and Fig. 10, including methods 700, 800 and 900, that the viewer preference profile may be represented in many ways, so long as the viewer preference profile contains the viewer’s preference selections and information about the regions in the images that matter to the viewer in influencing the viewer’s preference selections. This information is key to addressing the viewer’s preference bias or inter-viewer preference difference.
Details of how an image processing profile is associated with a user are now provided.

One or more examples of step 655 in method 605 of Fig. 6B, where an image processing profile is associated with the viewer preference profile of the viewer, will be described here in conjunction with Fig. 11A, Fig. 11B, Fig. 11C and Fig. 11D.
An image processing profile contains information to facilitate the association of the image processing profile with the viewer preference profile of the viewer. This association enables image transformation information specified within the image processing profile to be used to transform an input image to a modified image in a way that increases the viewer’s preference for that modified image over the input image. This process addresses the viewer’s preference bias. Specifically, the image processing profile contains data including at least one image attribute, at least one preference selection and at least one image transformation.
The image attribute and preference selection data in the image processing profile is used to associate the viewer preference profile with the image processing profile. The image attribute data is analogous to the image features described in method 800 and method 900. Example low level image attributes include pixel based attributes such as luminance, colour, gradient, edge, contrast, sharpness, smoothness and Gabor attributes. Other transform based low level image attributes include attributes from singular value decomposition and frequency decomposition of an image region. Example high level image attributes include object-based attributes which are obtained from methods that discern between mountain-like regions, human face and body regions, sky, grass and architectural regions. High level attributes may also be obtained by computing statistical information from low level image attributes.
The preference selection data in the image processing profile is generally represented as a preference histogram obtained from recording interactions of a group of viewers with the computing device.
The image transformation data in the image processing profile consists of one or more image processing transforms that transform an input image to a modified image in a way that increases a viewer’s preference for the modified image over the input image. Example image processing transforms include modifications to the luminance, saturation, hue, chroma, contrast, sharpness and noise level of all or part of the initial image. The image processing profile may include one or more of image transformation data, one or more image features, and preference selection data or preference histogram data.

A method to create the image processing profiles will now be described in detail. According to one example, the image processing profiles are predetermined through modelling the image preference behaviour for a group of viewers, wherein the members of the group consist of viewers that are representative of the viewers in the target application. The aim at the modelling stage is to ensure that at least one image processing profile is available to represent each of the image attributes that is intended to be supported by the target application. The types of image attribute are predetermined, and so are the coverage requirements for each of the image attributes. For example, the coverage for the image attribute ‘brightness of an image region’ may include a ‘bright region’, a ‘moderate brightness region’ and a ‘dark region’. Each of the ‘bright region’, ‘moderate brightness region’ and ‘dark region’ may be represented by a range of values in the Luminance component (L*) of the image in the CIE 1976 L*a*b* colour space. For instance, an image region with a mean L* value above 66 may be classified as a ‘bright region’ while an image region with a mean L* value below 34 may be classified as a ‘dark region’. The remaining regions may be classified as ‘moderate brightness regions’.
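A minimal sketch of classifying a region’s brightness attribute from its mean L* value, using the example thresholds of 66 and 34 quoted above, might look as follows; the function and label names are illustrative.

    def brightness_attribute(mean_l_star, bright_threshold=66.0, dark_threshold=34.0):
        # mean_l_star: mean CIELAB L* (lightness) value of the region, in the range 0-100
        if mean_l_star > bright_threshold:
            return "bright region"
        if mean_l_star < dark_threshold:
            return "dark region"
        return "moderate brightness region"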
To begin with, image regions are selected from one or more initial images. These image regions, also referred to as focus regions, contain image content that meets the coverage requirements for one or more predetermined image attributes. For example, a focus region which has low luminance values and low Sobel operator responses matches the image attributes “Dark and Smooth Region”. The focus regions may have arbitrary sizes. In this example, the focus region has a size which extends to approximately a two-degree field of view of the viewer.

Following the selection of the focus regions, image processing transforms are applied to each of the initial images to create the associated modified images. The modified image is the result of applying an image processing transform (for example, contrast adjustment or chroma adjustment) to the initial image. Each viewer is then presented with a pair of images including the initial image and the modified image on the display of the computing device. For each pair of presented images, a focus region is selected and highlighted on the display, and a request (for example an audible or visual request) is output by the computer asking the viewer to concentrate on the focus region while selecting from a preference for the initial image, a preference for the modified image, or an indifference towards any of the images. These steps are repeated until preference selections are collected for all focus regions in the initial image and the modified image for all viewers.
The viewers’ preference selections are grouped based on similarity of the focus regions, which are based on the predetermined image attributes. Using the same example above, focus regions that have the “Dark and Smooth Region” attribute are grouped together. Within each group of focus regions, the viewers are further grouped based on their preference selections (represented as a preference histogram) as a means to address the viewer’s preference bias. Alternatively, the grouping based on focus regions and viewers’ preference selections can be achieved using clustering techniques such as the K-means clustering algorithm. Finally, for each group of viewers with a matching preference bias in a focus region group, their preference behaviour is modelled to yield an image processing profile for the group.

Diagram 1100 in Fig. 11A illustrates an example viewer preference profile 1101 that contains three image features and a preference histogram with three bins, and is also described in vector form as follows:

{Low Luminance, Moderate Edge Strength, Moderate Frequency Content, Preference for Initial=0.20, No Preference=0.10, Preference for Modified=0.70}
Diagrams 1110 in Fig. 11B, 1120 in Fig. 11C and 1130 in Fig. 11D illustrate three candidate image processing profiles 1111, 1121 and 1131 that are yet to be associated with the viewer preference profile 1101 in diagram 1100. Each of the three image processing profiles is represented by two image attributes, a preference histogram with three bins and image transform data. The image processing profiles are described in vector form as follows:
Candidate image processing profile 1111 in diagram 1110: {Dark Region, Smooth Region, Preference for Initial=0.20, No Preference=0.15, Preference for Modified=0.65, Image Transform A}
Candidate image processing profile 1121 in diagram 1120: {Dark Region, Textured Region, Preference for Initial=0.20, No Preference=0.15, Preference for Modified=0.65, Image Transform B}
Candidate image processing profile 1131 in diagram 1130: {Dark Region, Textured Region, Preference for Initial=0.58, No Preference=0.30, Preference for Modified=0.12, Image Transform C}
In this example, the image features “Low Luminance, Moderate Edge Strength, Moderate Frequency Content” from the viewer preference profile 1101 are best matched to “Dark Region, Textured Region” as contained in the candidate image processing profiles 1121 and 1131, thus removing image processing profile 1111 as a possible match candidate.
Next, the preference histogram “Preference for Initial=0.20, No Preference=0.10, Preference for Modified=0.70” from the viewer preference profile 1101 is best matched to the preference histogram “Preference for Initial=0.20, No Preference=0.15, Preference for Modified=0.65” as contained in candidate image processing profile 1121. Hence, image processing profile 1121 is associated with the viewer by way of viewer preference profile 1101. Subsequently, the image transformation information “Image Transform B” is used to transform input images for use by the viewer.
Diagrams 1200 in Fig. 12A, 1210 in Fig. 12B and 1220 in Fig. 12C illustrate another three candidate image processing profiles 1201, 1211 and 1221 that are yet to be associated with the viewer preference profile 1101 in diagram 1100. Each of the three image processing profiles is represented by one image attribute, a preference histogram with three bins and image transform data. The image processing profiles are described in vector form as follows:
Candidate image processing profile 1201 in diagram 1200: {Dark Region, Preference for Initial=0.20, No Preference=0.15, Preference for Modified=0.65, Image Transform D}

Candidate image processing profile 1211 in diagram 1210: {Textured Region, Preference for Initial=0.20, No Preference=0.15, Preference for Modified=0.65, Image Transform E}

Candidate image processing profile 1221 in diagram 1220: {Smooth Region, Preference for Initial=0.20, No Preference=0.15, Preference for Modified=0.65, Image Transform F}
In this example, the image features “Low Luminance, Moderate Edge Strength, Moderate Frequency Content” with preference histogram “Preference for Initial=0.20, No Preference=0.10, Preference for Modified=0.70” from the viewer preference profile 1101 are best matched to candidate image processing profile 1201 and candidate image processing profile 1211. Hence, two separate image processing profiles 1201 and 1211 are associated with the viewer by way of viewer preference profile 1101. Subsequently, the image transformation information “Image Transform D” and “Image Transform E” are used to transform input images for use by the viewer.
It should be understood that one or more image processing profiles can be associated with a viewer preference profile as described in the above examples. Details of how the above matches are performed are provided below.
In a preferred example, the matching of the image processing profile with the viewer preference profile is performed quantitatively in the form of a distance measurement. The image attributes of the image processing profile and the image features of the viewer preference profile are predetermined. The distances between an image attribute and every image feature are also predetermined. A distance of 0 is assigned to an image attribute that completely matches an image feature, such as “Low Luminance” and “Dark Region”. A distance of 1 is assigned to a complete mismatch, such as “Low Luminance” and “Smooth Region”. The distance value ranges from 0.0 to 1.0 to represent partial matches. Initially, the distances of an image attribute to each of the image features are computed. The lowest distance value is chosen and associated with the image attribute. The measurement is repeated until a distance value is associated with each image attribute. Then, the distance value for each of the preference histogram bins is measured by computing the difference between the frequencies or proportions of each corresponding bin. The final distance value between an image processing profile and a viewer preference profile is based on the arithmetic mean of all the measured distance values. Alternatively, it will be understood that other suitable distance measurements such as Euclidean distance and Manhattan distance may be used. The image processing profile that has the lowest distance value to the viewer preference profile is associated with that viewer preference profile and so assigned to the associated viewer.
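The sketch below illustrates the distance measurement just described, assuming a predetermined attribute-to-feature distance table and simple dictionary representations of the two profiles; all names, the table entries beyond the two examples quoted above, and the profile layout are assumptions made for the example.

    # Predetermined distances between image attributes and image features
    # (0.0 = complete match, 1.0 = complete mismatch, values between = partial match).
    ATTRIBUTE_FEATURE_DISTANCE = {
        ("Dark Region", "Low Luminance"): 0.0,
        ("Smooth Region", "Low Luminance"): 1.0,
        ("Textured Region", "Moderate Edge Strength"): 0.2,   # illustrative value
        # ... the remaining pairs would be predetermined in the same way
    }

    BINS = ("initial", "no preference", "modified")

    def profile_distance(processing_profile, preference_profile):
        attribute_distances = []
        for attr in processing_profile["attributes"]:
            dists = [ATTRIBUTE_FEATURE_DISTANCE.get((attr, feat), 1.0)
                     for feat in preference_profile["features"]]
            attribute_distances.append(min(dists))   # closest feature per attribute

        histogram_distances = [abs(processing_profile["histogram"][b]
                                   - preference_profile["histogram"][b])
                               for b in BINS]

        all_distances = attribute_distances + histogram_distances
        return sum(all_distances) / len(all_distances)   # arithmetic mean

    def best_matching_profile(candidate_profiles, preference_profile):
        # The image processing profile with the lowest distance is associated
        # with the viewer preference profile.
        return min(candidate_profiles,
                   key=lambda p: profile_distance(p, preference_profile))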
According to one example of an implementation of the herein described process, there are two users (also referred to as viewers in the examples described above). The first user is an elderly person and the second user is a teenager. Both users have large collections of natural images captured over many years using digital cameras from different vendors. The aesthetics of those images vary as a result of on-going improvements to image capture technology over time. Both users wish to create photo albums using photo album creation software. It is important to the users that images in the photo albums being created turn out in a manner that is preferred by each user.
Upon installation of the photo album creation software, each user is presented with a number of pairs of images during the software calibration stage to create a user preference profile. Within each pair of images, the second image (modified image) is a transformed version of the first image (initial image). The transformation that is applied to the second image is one of chroma, luminance and contrast adjustments. The images are presented on the user’s monitor, which is equipped with an in-built camera capable of tracking the user’s gaze. For each pair of images, the user is required to choose from one of three preference options: “prefer the first image”, “prefer the second image”, or “indifference towards any image”. At the same time, the user’s gaze data is recorded and converted into visual attention points complete with spatial location information and attention information for each point. A user preference profile is created for each user, consisting of candidate visual attention points and preference selections for all pairs of images.
Upon further analysis, the elderly user’s candidate visual attention points are generally sparsely located, covering almost all parts of the images with slightly higher attention spent on smooth regions. The elderly user’s preference selections also indicate a higher preference for the second images (modified images). On the other hand, the teenager’s candidate visual attention points indicate that the user focused mainly on dark and bright regions. The teenager’s preference selections also indicate a higher preference for the second images (modified images). Although both users have a similarly higher preference for the second images, their candidate visual attention points are substantially different.
Subsequently, for the elderly user, the photo album software associates a first image processing profile with the elderly user, where the first image processing profile contains image transforms to increase the global chroma, luminance and contrast levels. It can be inferred that the elderly user may not be able to see many image details due to old-age vision degeneration. Hence, global changes to image levels are more easily noticeable by the elderly user.
In contrast, for the teenage user, the photo album software associates a second image processing profile with the teenage user, where the second image processing profile contains a combination of image transforms that improve image details in bright and dark image regions, such as a combination of high dynamic range processing, contrast and luminance adjustments. It can be inferred that the teenager is concerned about image details in highlights and shadows within the images.
The candidate images for the photo albums are then processed using the respective image transforms associated with the image processing profile (first or second) associated with each of the users.

It can be seen according to the various examples described above that a second viewer viewing the initial image and one or more modified images may have the same preference as a first viewer of those images, but the second viewer may have a substantially different determined set of visual attention points than the first viewer, and so is associated with an image processing profile that is different to the image processing profile associated with the first viewer. That is, a second set of visual attention points different to a first set of visual attention points may be determined when receiving the same preference selection. Subsequently, a different image processing profile may be selected based on the same preference selection but a different set of visual attention points.

Claims (20)

CLAIMS:
    1. A computer implemented method of automatically adjusting one or more images, said method comprising the steps of: presenting an initial image and at least one modified image for display; determining a set of visual attention points related to image content common between the initial image and the modified image; receiving a preference selection identifying a preference associated with the initial image and the modified image; selecting at least one image processing profile from a plurality of image processing profiles based on the preference selection and the determined set of visual attention points; and adjusting one or more further images based on the selected image processing profile.
  2. The method of claim 1 further comprising the steps of: identifying one or more regions of interest in the initial image or the modified image based on the determined set of visual attention points, where the regions of interest are associated with the image content that attracts visual attention of a viewer of the display; and selecting the image processing profile based on the preference selection and the identified regions of interest.
  3. The method of claim 2, wherein the regions of interest are identified based on a number of visual attention points in the set of visual attention points in a defined area of the initial image or the modified image.
  4. The method of claim 2, wherein the regions of interest are identified based on a relationship between visual attention points in the set of visual attention points.
  5. The method of claim 2, wherein the regions of interest are identified based on a time value associated with visual attention points in the set of visual attention points.
  6. The method of claim 1 further comprising the steps of: determining a set of candidate visual attention points from the set of visual attention points; and selecting the image processing profile based on the preference selection and the determined set of candidate visual attention points.
  7. The method of claim 6 further comprising the step of determining the set of candidate visual attention points based on a threshold time value associated with the visual attention points.
  8. The method of claim 6 further comprising the step of determining the set of candidate visual attention points based on spatial locations in the initial image or the modified image associated with the determined set of visual attention points.
  9. The method of claim 8 further comprising the step of applying a weighting function to the spatial locations based on a time value associated with the visual attention points.
  10. The method of claim 6 further comprising the step of determining the set of candidate visual attention points by determining clusters of visual attention points within the determined set of visual attention points and selecting the set of candidate visual attention points based on the determined clusters.
  11. The method of claim 10 wherein the set of candidate visual attention points are selected based on one or more of the number of visual attention points within the clusters, the size of the clusters, and the centroid of the clusters.
  12. The method of claim 6 further comprising the steps of: determining one or more image features based on the candidate visual attention points; and selecting the image processing profile based on the preference selection and the identified image features.
  13. The method of claim 12 further comprising the steps of: determining one or more dominant image features from the determined one or more image features, and selecting the image processing profile based on the preference selection and the determined dominant image features.
  14. The method of claim 1 further comprising the steps of: presenting a plurality of pairs of images comprising the initial image with each of at least two modified images; and receiving preference selections associated with the presented pairs of images.
  15. The method of claim 14 further comprising the steps of: determining a preference histogram from the preference selections; determining a set of candidate visual attention points from the set of visual attention points; determining one or more image features based on the candidate visual attention points; and selecting the image processing profile based on the determined image features and the determined preference histogram.
  16. The method of claim 1 further comprising the steps of: creating a viewer preference profile based on the preference selection and the determined set of visual attention points, and selecting the at least one image processing profile from the plurality of image processing profiles based on the viewer preference profile.
  17. The method of claim 1, wherein the presenting step comprises presenting the initial image and at least two modified images to the display.
  18. The method of claim 1, wherein the image processing profile comprises image transformation data, one or more image features, and preference selection data or preference histogram data.
  19. The method of claim 1 further comprising the steps of determining a second set of visual attention points different to the set of visual attention points, receiving the preference selection, and selecting a different image processing profile to the at least one image processing profile based on the preference selection and the determined set of visual attention points.
  20. An image processing device arranged to automatically adjust one or more images, said device further arranged to: present an initial image and at least one modified image for display; determine a set of visual attention points related to image content common between the initial image and the modified image; receive a preference selection identifying a preference associated with the initial image and the modified image; select at least one image processing profile from a plurality of image processing profiles based on the preference selection and the determined set of visual attention points; and adjust one or more further images based on the selected image processing profile.
AU2015203571A 2015-06-26 2015-06-26 A method and image processing device for automatically adjusting images Abandoned AU2015203571A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2015203571A AU2015203571A1 (en) 2015-06-26 2015-06-26 A method and image processing device for automatically adjusting images


Publications (1)

Publication Number Publication Date
AU2015203571A1 true AU2015203571A1 (en) 2017-01-19

Family

ID=57759408

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2015203571A Abandoned AU2015203571A1 (en) 2015-06-26 2015-06-26 A method and image processing device for automatically adjusting images

Country Status (1)

Country Link
AU (1) AU2015203571A1 (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309578A (en) * 2023-05-19 2023-06-23 山东硅科新材料有限公司 Plastic wear resistance image auxiliary detection method using silane coupling agent
CN116309578B (en) * 2023-05-19 2023-08-04 山东硅科新材料有限公司 Plastic wear resistance image auxiliary detection method using silane coupling agent


Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application