
An image enhancement apparatus and method

Info

Publication number
GB2518144A
Authority
GB
United Kingdom
Prior art keywords
parameter
motion
region
playback signal
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1315502.3A
Other versions
GB201315502D0 (en)
Inventor
Mikko Tammi
Arto Juhani Lehtiniemi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to GB1315502.3A priority Critical patent/GB2518144A/en
Publication of GB201315502D0 publication Critical patent/GB201315502D0/en
Priority to PCT/FI2014/050650 priority patent/WO2015028713A1/en
Publication of GB2518144A publication Critical patent/GB2518144A/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127 Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00132 Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture in a digital photofinishing system, i.e. a system where digital photographic images undergo typical photofinishing processing, e.g. printing ordering
    • H04N1/00167 Processing or editing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H04N5/145 Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Typically, cinemagraphs are still photographs in which a minor or repeated movement occurs. The invention relates to providing audio processing functionality for cinemagraphs or visual animations. The invention comprises means for carrying out the steps of: analysing at least two images to determine at least one region common to the at least two images; determining at least one parameter associated with a motion of at least one region; determining at least one playback signal, such as an audio file, to be associated with the at least one region; and processing the at least one playback signal based on the at least one parameter. The invention allows a user, easily and without significant skill, to generate a cinemagraph with audio or tactile effects. Thus a user may select a region of interest 203, select an audio file 205 and select a frame for synchronising with the beat 209. The effects may be generated and embedded as metadata including audio effects signals or links to such signals.

Description

AN IMAGE ENHANCEMENT APPARATUS AND METHOD
Field
The present invention relates to providing additional functionality for images. The invention further relates to, but is not limited to, display apparatus providing additional functionality for images displayed in mobile devices. More particularly, the invention relates to providing audio processing functionality for visual animations, and further relates to, but is not limited to, display apparatus providing audio enabled visual data for animating and displaying in mobile devices.
Background
Many portable devices, for example mobile telephones, are equipped with a display such as a glass or plastic display window for providing information to the user.
Furthermore such display windows are now commonly used as touch sensitive inputs. In some further devices the device is equipped with transducers suitable for generating audible feedback.
Images and animated images are known. Animated images or cinemagraph images can provide the illusion that the viewer is watching a video. Cinemagraphs are typically still photographs in which a minor and repeated movement occurs. These are particularly useful as they can be transferred or transmitted between devices using significantly smaller bandwidth than conventional video.
Statement
According to an aspect, there is provided a method comprising: analysing at least two images to determine at least one region common to the at least two images; determining at least one parameter associated with a motion of at least one region; determining at least one playback signal to be associated with the at least one region; and processing the at least one playback signal based on the at least one parameter.
Determining at least one parameter associated with a motion of at least one region may comprise: determining a motion of the at least one region; and determining at least one parameter based on the motion of the at least one region.
The at least one parameter may comprise at least one of: a motion periodicity; a motion direction; a motion speed; and a motion type.
Determining at least one playback signal to be associated with the at least one region may comprise determining at least one playback signal based on the at least one parameter.
Determining at least one playback signal based on the at least one parameter may comprise: determining at least two playback signals based on the at least one parameter; receiving an input to select one of the at least two playback signals; and selecting one of the at least two playback signals based on the input.
Determining at least one playback signal based on the at least one parameter may comprise: determining for at least one playback signal at least one motion parameter value; and determining the at least one motion parameter value is within a determined distance of the at least one parameter.
Processing the at least one playback signal based on the at least one parameter may comprise at least one of: spatial processing the at least one playback signal based on the at least one parameter; combining the at least one playback signal to a recorded at least one audio signal based on the at least one parameter; and signal processing the at least one playback signal based on the at least one parameter.
Spatial processing the at least one playback signal based on the at least one parameter may comprise modifying the audio field of the at least one playback signal to move based on the motion of the at least one region.
The method may further comprise: displaying at least one image of the at least two images; and synchronising and outputting the processed at least one playback signal.
The at least one playback signal may comprise at least one of: at least one audio signal; and at least one tactile signal.
Processing the at least one playback signal based on the at least one parameter may comprise at least one of: determining within the playback signal at least one audio object; and spatially processing the at least one audio object based on the at least one parameter such that the at least one audio object follows the motion of the at least one region.
According to a second aspect there is provided an apparatus comprising: means for analysing at least two images to determine at least one region common to the at least two images; means for determining at least one parameter associated with a motion of at least one region; means for determining at least one playback signal to be associated with the at least one region; and means for processing the at least one playback signal based on the at least one parameter.
The means for determining at least one parameter associated with a motion of at least one region may comprise: means for determining a motion of the at least one region; and means for determining at least one parameter based on the motion of the at least one region.
The at least one parameter may comprise at least one of: a motion periodicity; a motion direction; a motion speed; and a motion type.
The means for determining at least one playback signal to be associated with the at least one region may comprise means for determining at least one playback signal based on the at least one parameter.
The means for determining at least one playback signal based on the at least one parameter may comprise: means for determining at least two playback signals based on the at least one parameter; means for receiving an input to select one of the at least two playback signals; and means for selecting one of the at least two playback signals based on the input.
The means for determining at least one playback signal based on the at least one parameter may comprise: means for determining for at least one playback signal at least one motion parameter value; and means for determining the at least one motion parameter value is within a determined distance of the at least one parameter.
The means for processing the at least one playback signal based on the at least one parameter may comprise at least one of: means for spatial processing the at least one playback signal based on the at least one parameter; means for combining the at least one playback signal to a recorded at least one audio signal based on the at least one parameter; and means for signal processing the at least one playback signal based on the at least one parameter.
The means for spatial processing the at least one playback signal based on the at least one parameter may comprise means for modifying the audio field of the at least one playback signal to move based on the motion of the at least one region.
The apparatus may further comprise: means for displaying at least one image of the at least two images; and means for synchronising and outputting the processed at least one playback signal.
The at least one playback signal may comprise at least one of: at least one audio signal; and at least one tactile signal.
The means for processing the at least one playback signal based on the at least one parameter comprises at least one of: means for determining within the playback signal at least one audio object; and means for spatially processing the at least one audio object based on the at least one parameter such that the at least one audio object follows the motion of the at least one region.
According to a third aspect there is provided an apparatus comprising at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured to, with the at least one processor, cause the apparatus to at least: analyse at least two images to determine at least one region common to the at least two images; determine at least one parameter associated with a motion of at least one region; determine at least one playback signal to be associated with the at least one region; and process the at least one playback signal based on the at least one parameter.
Determining at least one parameter associated with a motion of at least one region may cause the apparatus to: determine a motion of the at least one region; and determine at least one parameter based on the motion of the at least one region.
The at least one parameter may comprise at least one of: a motion periodicity; a motion direction; a motion speed; and a motion type.
Determining at least one playback signal to be associated with the at least one region may cause the apparatus to determine at least one playback signal based on the at least one parameter.
Determining at least one playback signal based on the at least one parameter may cause the apparatus to: determine at least two playback signals based on the at least one parameter; receive an input to select one of the at least two playback signals; and select one of the at least two playback signals based on the input.
Determining at least one playback signal based on the at least one parameter may cause the apparatus to: determine for at least one playback signal at least one motion parameter value; and determine the at least one motion parameter value is within a determined distance of the at least one parameter.
Processing the at least one playback signal based on the at least one parameter may cause the apparatus to perform at least one of: spatial processing the at least one playback signal based on the at least one parameter; combining the at least one playback signal to a recorded at least one audio signal based on the at least one parameter; and signal processing the at least one playback signal based on the at least one parameter.
Spatial processing the at least one playback signal based on the at least one parameter may cause the apparatus to modify the audio field of the at least one playback signal to move based on the motion of the at least one region.
The apparatus may further be caused to: display at least one image of the at least two images; and synchronise and output the processed at least one playback signal.
The at least one playback signal may comprise at least one of: at least one audio signal; and at least one tactile signal.
Processing the at least one playback signal based on the at least one parameter may cause the apparatus to perform at least one of: determine within the playback signal at least one audio object; and spatially process the at least one audio object based on the at least one parameter such that the at least one audio object follows the motion of the at least one region.
According to a fourth aspect there is provided an apparatus comprising: an analyser configured to analyse at least two images to determine at least one region common to the at least two images; a motion determiner configured to determine at least one parameter associated with a motion of at least one region; a playback determiner configured to determine at least one playback signal to be associated with the at least one region; and a processor configured to process the at least one playback signal based on the at least one parameter.
The motion determiner may be configured to: determine a motion of the at least one region; and determine at least one parameter based on the motion of the at least one region.
The at least one parameter may comprise at least one of: a motion periodicity; a motion direction; a motion speed; and a motion type.
The playback determiner may be configured to determine at least one playback signal based on the at least one parameter.
The playback determiner may be configured to: determine at least two playback signals based on the at least one parameter; receive an input to select one of the at least two playback signals; and select one of the at least two playback signals based on the input.
The playback determiner may be configured to: determine for at least one playback signal at least one motion parameter value; and determine the at least one motion parameter value is within a determined distance of the at least one parameter.
The processor may comprise at least one of: a spatial processor configured to spatial process the at least one playback signal based on the at least one parameter; a combiner configured to combine the at least one playback signal to a recorded at least one audio signal based on the at least one parameter; and a signal processor configured to signal process the at least one playback signal based on the at least one parameter.
The spatial processor may be configured to modify the audio field of the at least one playback signal to move based on the motion of the at least one region.
The apparatus may further comprise: a display configured to display at least one image of the at least two images; and a synchroniser configured to synchronise and output the processed at least one playback signal.
The at least one playback signal may comprise at least one of: at least one audio signal; and at least one tactile signal.
The processor may comprise at least one of: an audio object determiner configured to determine within the playback signal at least one audio object; and a spatial processor configured to spatially process the at least one audio object based on the at least one parameter such that the at least one audio object follows the motion of the at least one region.
A computer program product stored on a medium may cause an apparatus to perform the method as described herein.
An electronic device may comprise apparatus as described herein.
A chipset may comprise apparatus as described herein.
Summary of Figures
For better understanding of the present invention, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 shows schematically an apparatus suitable for employing some embodiments;
Figure 2 shows schematically an example audio enhanced cinemagraph generator;
Figure 3 shows a flow diagram of the operation of the audio enhanced cinemagraph generator as shown in Figure 2 according to some embodiments;
Figure 4 shows schematically a video analyser as shown in Figure 2 according to some embodiments;
Figure 5 shows a flow diagram of the operation of the video analyser as shown in Figure 4 according to some embodiments;
Figure 6 shows schematically an audio/haptic processor as shown in Figure 2 according to some embodiments; and
Figure 7 shows a flow diagram of the operation of the audio/haptic processor as shown in Figure 6 according to some embodiments.
Description of Example Embodiments
The concept of embodiments of the application is to combine audio signals and/or haptic signals to cinemagraphs (animated images) during the generation of cinemagraphs or animated images. This can be implemented in the example shown herein by generating and embedding metadata including audio effect signals or links to the audio effect signal (or haptic effect signals or links to the haptic effect signals) using at least one of intrinsic and synthetic audio (haptic) signals in such a manner that the generated cinemagraph is enhanced by the audio and/or haptic effect.
High quality photographs and videos are known to provide a great way to relive an experience. Cinemagraphs or animated images are seen as an extension of a photograph and produced using post-production techniques. The cinemagraph provides a means to enable motion in an object common or mutual between images or in a region of an otherwise still or static picture. For example the design or aesthetic element allows subtle motion elements while the rest of the image is still. In some cinemagraphs the motion or animation feature is repeated.
In the following description and claims the term object, common object, or region can be considered to refer to any element, object or component which is shared (or mutual) across the images used to create the cinemagraph or animated object. For example the images used as an input could be a video of a moving toy train against a substantially static background. In such an example the object, subject, common object, region, or element can be the toy train, which in the animated image provides the dynamic or subtle motion element whilst the rest of the image is still. It would be understood that the common object or subject may not be substantially identical from frame to frame. However typically there is a large degree of correlation between subsequent image objects as the object moves or appears to move. For example the object or subject of the toy train can appear to move to and from the observer from frame to frame in such a way that the train appears to get larger/smaller or the toy train appears to turn away from or to the observer by the toy train profile changing.
In other words the size, shape and position of the region of the image identified as the subject, object or element can change from image to image; however within the image there is a selected entity which from frame to frame has a degree of correlation (as compared to the static image components which have substantially perfect correlation from frame to frame).
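By way of a non-limiting illustration only, the following Python sketch shows one way such a frame-to-frame correlation for a candidate region could be computed; the function name, array sizes and the shifted-patch example are assumptions made for this sketch rather than part of any described embodiment.

```python
import numpy as np

def region_correlation(frame_a, frame_b, box):
    """Normalised correlation of one rectangular region between two frames.

    frame_a, frame_b: 2-D greyscale arrays of equal shape.
    box: (top, left, height, width) of the candidate region.
    A static background region scores close to 1.0; a moving subject
    scores lower because its pixels change from frame to frame.
    """
    top, left, height, width = box
    a = frame_a[top:top + height, left:left + width].astype(float).ravel()
    b = frame_b[top:top + height, left:left + width].astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 1.0

# Example: shift a patch between two otherwise identical frames and compare
# a static region with the region containing the moving patch.
rng = np.random.default_rng(0)
frame1 = rng.random((120, 160))
frame2 = frame1.copy()
frame2[40:80, 60:100] = np.roll(frame1[40:80, 60:100], 5, axis=1)

print(region_correlation(frame1, frame2, (0, 0, 30, 30)))    # close to 1.0
print(region_correlation(frame1, frame2, (40, 60, 40, 40)))  # noticeably lower
```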
A cinemagraph can in some ways be seen as a potential natural progression of image viewing from greyscale (black and white photography) to colour, colour to high-resolution colour images, fully static to regional motion within the photograph.
However reliving an experience can be seen as being incomplete without audio, and cinemagraphs at present cannot render audio and/or tactile effects with the images.
The problem therefore is how to enable an apparatus, easily and without significant skilled and experienced input from the user, to generate a cinemagraph or animated image such that an audio and/or tactile effect can be associated with it.
Typically a cinemagraph (or motion photograph or animated image) is constructed from a video sequence, in which audio is likely to be available or associated with it.
However when attempting to tie the audio to the motion photograph, the recorded audio of the scene as a whole cannot simply be tied to the motion image; rather the attached audio should be selected and processed selectively.
It would be understood that a cinemagraph can normally be understood to have a repeatable, subtle, motion element (or subject or object). However in some situations the audio can be attached to a non-repeatable object or motion element within an animated image or photograph, for example adding a lightning/thunder sound to a motion photograph. Similarly in some embodiments the audio clip or signal can be a single-instance play element within a visual motion element animated scene.
The concept as described in embodiments herein is to analyse the movement occurring in the images which are used to generate the cinemagraph and, based on the characteristics or parameters determined, additional features such as audio or tactile effect selection or processing can be determined. In some embodiments the use of the additional features can be made very simple for the user (for example providing a user interface switch to turn the feature on/off) and different options can be easily added to enhance the experience.
For example as described in further detail in embodiments herein additional features to enhance the cinemagraph experience can be at least one of: spatial sound scene processing; haptic (vibra) effect addition; music object modification; and movement matching to an audio scene from a stored or retrieved audio database (for example a movie catalogue).
With respect to Figure 1, a schematic block diagram of an example electronic device 10 or apparatus on which embodiments of the application can be implemented is shown. The apparatus 10 is in such embodiments configured to provide improved image experiences.
The apparatus 10 is in some embodiments a mobile terminal, mobile phone or user equipment for operation in a wireless communication system. In other embodiments, the apparatus is any suitable electronic device configured to process video and audio data. In some embodiments the apparatus is configured to provide an image display, such as for example a digital camera, a portable audio player (mp3 player), or a portable video player (mp4 player). In other embodiments the apparatus can be any suitable electronic device with a touch interface (which may or may not display information) such as a touch-screen or touch-pad configured to provide feedback when the touch-screen or touch-pad is touched. For example in some embodiments the touch-pad can be a touch-sensitive keypad which can in some embodiments have no markings on it and in other embodiments have physical markings or designations on the front window. The user can in such embodiments be notified of where to touch by a physical identifier such as a raised profile, or a printed layer which can be illuminated by a light guide.
The apparatus 10 comprises a touch input module or user interface 11, which is linked to a processor 15. The processor 15 is further linked to a display 12. The processor 15 is further linked to a transceiver (TX/RX) 13 and to a memory 16.
In some embodiments, the touch input module 11 and/or the display 12 are separate or separable from the electronic device and the processor receives signals from the touch input module 11 and/or transmits signals to the display 12 via the transceiver 13 or another suitable interface. Furthermore in some embodiments the touch input module 11 and display 12 are parts of the same component. In such embodiments the touch interface module 11 and display 12 can be referred to as the display part or touch display part.
The processor 15 can in some embodiments be configured to execute various program codes. The implemented program codes, in some embodiments, can comprise such routines as audio signal parsing and decoding of image data, touch processing, input simulation, or tactile effect simulation code where the touch input module inputs are detected and processed, effect feedback signal generation where electrical signals are generated which when passed to a transducer can generate tactile or haptic feedback to the user of the apparatus, or actuator processing configured to generate an actuator signal for driving an actuator. The implemented program codes can in some embodiments be stored for example in the memory 16 and specifically within a program code section 17 of the memory 16 for retrieval by the processor 15 whenever needed. The memory 16 in some embodiments can further provide a section 18 for storing data, for example data that has been processed in accordance with the application, for example pseudo audio signal data.
The touch input module 11 can in some embodiments implement any suitable touch screen interface technology. For example in some embodiments the touch screen interface can comprise a capacitive sensor configured to be sensitive to the presence of a finger above or on the touch screen interface. The capacitive sensor can comprise an insulator (for example glass or plastic), coated with a transparent conductor (for example indium tin oxide ITO). As the human body is also a conductor, touching the surface of the screen results in a distortion of the local electrostatic field, measurable as a change in capacitance. Any suitable technology may be used to determine the location of the touch. The location can be passed to the processor which may calculate how the user's touch relates to the device. The insulator protects the conductive layer from dirt, dust or residue from the finger.
In some other embodiments the touch input module can be a resistive sensor comprising several layers, of which two are thin, metallic, electrically conductive layers separated by a narrow gap. When an object, such as a finger, presses down on a point on the panel's outer surface the two metallic layers become connected at that point; the panel then behaves as a pair of voltage dividers with connected outputs. This physical change therefore causes a change in the electrical current which is registered as a touch event and sent to the processor for processing.
In some other embodiments the touch input module can further determine a touch using technologies such as visual detection, for example a camera either located below the surface or over the surface detecting the position of the finger or touching object, projected capacitance detection, infrared detection, surface acoustic wave detection, dispersive signal technology, and acoustic pulse recognition. In some embodiments it would be understood that touch can be defined by both physical contact and 'hover touch' where there is no physical contact with the sensor but the object located in close proximity with the sensor has an effect on the sensor.
The touch input module as described here is an example of a user interface input.
It would be understood that in some other embodiments any other suitable user interface input can be employed to provide a user interface input, for example to select an object, item or region from a displayed screen. In some embodiments the user interface input can thus be a keyboard, mouse, keypad, joystick or any suitable pointer device.
The apparatus 10 can in some embodiments be capable of implementing the processing techniques at least partially in hardware, in other words the processing carried out by the processor 15 may be implemented at least partially in hardware without the need of software or firmware to operate the hardware.
The transceiver 13 in some embodiments enables communication with other electronic devices, for example in some embodiments via a wireless communication network.
The display 12 may comprise any suitable display technology. For example the display element can be located below the touch input module and project an image through the touch input module to be viewed by the user. The display 12 can employ any suitable display technology such as liquid crystal display (LCD), light emitting diodes (LED), organic light emitting diodes (OLED), plasma display cells, field emission display (FED), surface-conduction electron-emitter displays (SED), and electrophoretic displays (also known as electronic paper, e-paper or electronic ink displays). In some embodiments the display 12 employs one of the display technologies projected using a light guide to the display window.
With respect to Figure 2 an example audio enhanced cinemagraph generator is shown. Furthermore with respect to Figure 3 the operation of the example audio enhanced cinemagraph generator as shown in Figure 2 is further described.
In some embodiments the audio enhanced cinemagraph generator comprises a camera 101 or is configured to receive an input from a camera 101. The camera 101 can be any suitable video or image capturing apparatus. The camera 101 can be configured to capture images and pass the image or video data to a video processor 103. In some embodiments the camera block 101 can represent any suitable video or image source. For example in some embodiments the video or images can be retrieved from a suitable video or image storing memory or database of images. The images can be stored locally, for example within the memory of the audio enhanced cinemagraph apparatus, or in some embodiments can be stored external to the apparatus and received for example via the transceiver.
In some embodiments the audio enhanced cinemagraph generator comprises a user interface input 100 or is configured to receive a suitable user interface input 100. The user interface input 100 can be any suitable user interface input. In the following examples the user interface input is an input from a touch screen sensor from a touch screen display. However it would be understood that the user interface input in some embodiments can be at least one of: a mouse or pointer input, a keyboard input, and a keypad input. The user interface input 100 is shown with respect to some embodiments in Figure 8 which shows the displayed user interface display at various stages of the cinemagraph generation stage.
The example audio enhanced cinemagraph generator can in some embodiments comprise a video processor 103. The video processor 103 can be configured to receive the image or video data from the camera 101, analyse and process the video images to generate image motion/animation.
Furthermore as shown in Figure 2 the video processor 103 can be configured to receive input signals from the user interface input 100. For example in some embodiments the user interface input 100 can be configured to open or select the video from the camera 101 to be processed. The operation of selecting the video is shown in Figure 3 by step 201.
In some embodiments the video processor 103 comprises a video analyser 105. The video analyser 105 can be configured to receive the video or image data selected by the user interface input 100 and perform an analysis of the image to determine any objects or regions of the image which have meaningful motion, in other words whether there are any objects or regions with a periodicity suitable for generating a cinemagraph.
With respect to Figure 4 an example video analyser 105 is shown according to some embodiments. Furthermore with respect to Figure 5 a flow diagram of the operation of the example video analyser 105 as shown in Figure 4 is described in further detail.
In some embodiments the video analyser 105 comprises an image motion determiner 351. The image motion determiner 351 is configured to receive the video images.
The operation of receiving images from the camera or memory is shown in Figure 5 by step 451.
In some embodiments the image motion determiner 351 is configured to analyse the video images to determine regions with motion. In some embodiments the image motion determiner 351 can be configured to determine regions with periodicities which lend themselves to create meaningful cinemagraphs. In other words to determine regions with motion and whether the region would be suitable for use as a subject region for the cinemagraph.
In some embodiments the determined regions with motion (and filtered determined regions) can be displayed to the user to be selected as part of the cinemagraph generation operation.
The operation of analysing the images for regions of motion is shown in Figure 5 by step 453.
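As a non-limiting sketch of the kind of analysis the image motion determiner 351 could perform, the Python fragment below marks the pixels whose values change noticeably across a frame sequence and returns the bounding box of the resulting candidate region; the threshold, array shapes and function names are illustrative assumptions only.

```python
import numpy as np

def motion_mask(frames, threshold=0.1):
    """Mark pixels that change noticeably over a greyscale frame sequence.

    frames: array of shape (n_frames, height, width) with values in [0, 1].
    A pixel counts as moving when its mean absolute frame-to-frame
    difference exceeds the threshold; groups of such pixels are the
    candidate regions for the cinemagraph.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=0) > threshold

def bounding_box(mask):
    """Smallest rectangle (top, left, height, width) containing all moving
    pixels, or None when nothing moved."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()),
            int(ys.max() - ys.min() + 1), int(xs.max() - xs.min() + 1))
```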
In some embodiments the image motion determiner 351 can further be configured to analyse the images and specifically the determined motion regions to determine parameters or characteristics associated with the regions. For example in some embodiments the image motion determiner 351 can be configured to output the determined regions to at least one of a motion periodicity determiner 353, a motion direction determiner 355, and a motion speed/type determiner 357.
In some embodiments the video analyser 105 comprises a motion periodicity determiner 353. The motion periodicity determiner 353 can be configured to receive the output from the image motion determiner 351 and determine the region motion periodicity. It would be understood that in some embodiments the motion periodicity can be associated with an open loop periodicity, in other words where the region does not return to the same location as the initial region location, as well as a closed loop periodicity where the region returns substantially to the same location as the initial region location. The periodicity of the motion can be output as a parameter associated with the region, and in some embodiments as a time interval where motion occurs within a time period defined by the image capture length.
The operation of determining the region motion periodicity is shown in Figure 5 by step 455.
In some embodiments the video analyser comprises a motion direction determiner 355. The motion direction determiner 355 can be configured to receive the determined motion regions from the image motion determiner 351 and be configured to determine the direction of the motion of the region. The direction of the motion can be output as a parameter associated with the region and in some embodiments associated with a time value or time interval. For example in some embodiments the motion direction determiner 355 can be configured to determine the motion direction being a first direction for a first interval and in a further direction for a further interval. The operation of determining the region motion directionality is shown in Figure 5 by step 457.
In some embodiments the video analyser comprises a motion speed/type determiner 357. The motion speed/type determiner 357 can be configured to receive the region determined by the image motion determiner 351 and be further configured to determine the speed or type of motion associated with the region.
The speed/type of the motion can then be output as a parameter associated with the region and furthermore in some embodiments associated with a time value or time interval associated with the region.
The operation of determining the motion speed/type for the region is shown in Figure 5 by step 459.
The operation of outputting the determined (or filtered) regions and characteristics associated with the regions such as for example period, direction and speed/type of motion is shown in Figure 5 by step 461.
The operation of analysing the images to determine motion (and parameters such as periodicity) and determining objects or regions which have motion (and are able to create meaningful cinemagraphs) is shown in Figure 3 by step 202.
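Purely as an illustrative sketch of how the periodicity, direction and speed parameters described above could be derived, assuming the region has already been tracked to give one centroid per frame, the following could be used; the autocorrelation-based period estimate and the frame rate are assumptions for this example.

```python
import numpy as np

def motion_parameters(centroids, fps=25.0):
    """Derive simple motion parameters from a region's centroid per frame.

    centroids: array of shape (n_frames, 2) holding (x, y) positions.
    Returns an estimated period in seconds (strongest autocorrelation lag
    of the zero-mean speed signal), a mean direction in degrees and a mean
    speed in pixels per second.
    """
    c = np.asarray(centroids, dtype=float)
    v = np.diff(c, axis=0)                         # per-frame displacement
    speed = np.linalg.norm(v, axis=1)
    direction = np.degrees(np.arctan2(v[:, 1].mean(), v[:, 0].mean()))

    s = speed - speed.mean()
    ac = np.correlate(s, s, mode="full")[s.size - 1:]
    lag = 1 + int(np.argmax(ac[1:])) if s.size > 2 else 0

    return {"period_s": lag / fps if lag else None,
            "direction_deg": float(direction),
            "speed_px_per_s": float(speed.mean() * fps)}
```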
A meaningful cinemagraph can be considered to refer to image motion regions with suitable motion (and in some embodiments suitable for adding accompaniments such as audio/tactile effects) that do not annoy the observer. As discussed herein the video analyser 105 can in some embodiments output the determined objects or regions to the user interface to display to the user such that the user can select one of the objects or regions. In some other embodiments the video analyser 105 can select one of the determined objects or regions according to any suitable selection criteria. The user interface input 100 can then be configured to provide an input to select one of the objects or regions for further processing.
The selected (by the user or otherwise) region can then be passed to the region processor 107.
The operation of (the user) selecting one of the regions is shown in Figure 3 by step 203.
In some embodiments the video processor 103 comprises a region processor 107.
The region processor can be configured to receive the selected region and perform region processing on the image data in such a way that the output of the region processor is suitable cinemagraph video or image data.
For example in some embodiments the region processor 107 can perform at least one of the following processes: video stabilisation, frame selection, region segmentation, and overlay of motion segments on a static background. In some embodiments the region processor 107 can perform object detection.
Furthermore in some embodiments from the object or region selected there can be more than one time period or frame group or frame range suitable for providing animation. For example within a region there can be temporal periodicities at two or more different times from which one of the time or frame groups are selected or picked. The picked or selected frames are shown in the time-line below the region.
This for example can be illustrated with respect to an image based example where the object or region shows a toy train. The train completes one full circle which is captured in the first 30 frames of the video. The train then is static or does nothing for the next 100 frames. Then the train reverses for the next 30 frames and completes the circle in the reverse direction. So for a given region there are two 30 frame length periods, each of which is a possible candidate for the train motion.
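A minimal sketch of how such candidate frame ranges could be located from a per-frame motion-energy signal is given below; the threshold and the synthetic toy-train timeline are illustrative assumptions.

```python
import numpy as np

def candidate_frame_ranges(motion_energy, threshold=0.05):
    """Split a per-frame motion-energy signal into candidate frame ranges.

    Consecutive frames whose motion energy exceeds the threshold form one
    candidate range, mirroring the two 30-frame train movements described
    above.
    """
    active = np.asarray(motion_energy) > threshold
    ranges, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            ranges.append((start, i - 1))
            start = None
    if start is not None:
        ranges.append((start, len(active) - 1))
    return ranges

# Synthetic toy-train timeline: motion in frames 0-29 and 130-159 only.
energy = np.zeros(160)
energy[:30] = energy[130:] = 0.2
print(candidate_frame_ranges(energy))   # [(0, 29), (130, 159)]
```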
It will therefore be understood that the images or frames to be analysed may be associated with each other in some way. That is the images or frames may come from the same video stream or from the same sequence of images that have been captured, such as multiple snapshots based on the same camera lens view or a cinemagraph-like application.
The operation of region processing on the selected image data is shown in Figure 3 by step 204.
In some embodiments the region processor 107 and the video processor 103 can output the processed video or image data to the synchroniser 109.
In some embodiments the apparatus comprises an audio signal source 102. The audio signal source 102 can in some embodiments comprise a microphone or microphones. In such embodiments the microphone or microphones output an audio signal to an audio/haptic processor 111. It would be understood that in some embodiments the microphone or microphones are physically separated from the audio/haptic processor 111 and pass the information via a communications link, such as a wired or wireless link.
In some embodiments the audio signal source 102 comprises an audio/haptic database. In such embodiments the audio/haptic database can output an audio signal to the audio/haptic processor 111. The audio/haptic database can be any suitable database or linked audio/haptic signal database. For example the audio/haptic database can, in some embodiments, be a database of audio/haptic clips or signals stored on the internet or within the 'cloud'. Furthermore in some embodiments the audio database can be a database or collection of audio/haptic clips, signals or links to audio/haptic signals stored within the memory of the apparatus.
In some embodiments the user interface input 100 can be configured to control the audio/haptic processor 111 to select a suitable audio and/or haptic file or source.
The operation of the user selecting one of the audio and/or haptic files is shown in Figure 3 by step 205.
In some embodiments the audio/haptic processor 111 can be configured to look up or select the audio/haptic signal or link from the audio/haptic source 102 based on the motion detected by the video analyser 105.
In some embodiments the audio enhanced cinemagraph generator comprises an audio/haptic processor 111. The audio/haptic processor 111 can in some embodiments be configured to select or receive the audio/haptic signal which is processed in a suitable manner.
In some embodiments the audio/haptic processor 111 could be configured to process the audio/haptic signal based on the motion detected by the video analyser 105. For example as described herein in some embodiments the audio/haptic processor 111 selects the audio/haptic signal to be associated with the video region. Furthermore in some embodiments the audio/haptic processor 111 can be configured to modify spatial audio content based on the motion detected by the video analyser, for example to match the movement on the video region. In some embodiments the audio/haptic processor 111 can be configured to add or select haptic signals to generate suitable haptic effects based on the motion detected by the video analyser, for example to match the movement on the video region with haptic effects on the display. In some embodiments the audio/haptic processor 111 can be configured to modify audio or music objects based on the motion detected by the video analyser. In some embodiments the audio/haptic processor 111 can be configured to modulate the pitch of the audio/haptic signal that is being attached based on the motion detected by the video analyser; for example a motion of an object could be smoothly periodic rather than jerky, and in such a situation the audio/haptic processor 111 can be configured to modulate the overall periodicity of the audio according to the detected motion.
For example in some embodiments the audio/haptic processor 111 can be configured to perform a beat/tempo/rhythm estimation on the audio/haptic signal and select regions of the audio/haptic signal for looping in the cinemagraph based on the beat calculation values.
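By way of a non-limiting example of such a beat estimation, the sketch below derives a coarse onset envelope, estimates the beat period from its autocorrelation, and picks a loop length spanning a whole number of beats close to the motion period; the hop size, BPM range and sample rate are assumptions made for this sketch.

```python
import numpy as np

def estimate_beat_period(samples, rate=44100, lo_bpm=60, hi_bpm=180):
    """Rough beat period (seconds) from an onset-like energy envelope.

    Assumes at least a few seconds of mono audio samples.
    """
    hop = 512
    n = (len(samples) // hop) * hop
    env = np.abs(np.asarray(samples[:n], dtype=float)).reshape(-1, hop).mean(axis=1)
    env = np.maximum(np.diff(env, prepend=env[0]), 0.0)   # crude onset signal
    env -= env.mean()

    ac = np.correlate(env, env, mode="full")[env.size - 1:]
    env_rate = rate / hop
    lo = max(1, int(env_rate * 60.0 / hi_bpm))
    hi = max(lo + 1, min(ac.size, int(env_rate * 60.0 / lo_bpm)))
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / env_rate

def loop_length_samples(samples, rate, motion_period_s):
    """Loop length covering a whole number of beats near the motion period."""
    beat = estimate_beat_period(samples, rate)
    n_beats = max(1, round(motion_period_s / beat))
    return int(n_beats * beat * rate)
```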
The processing of audio and the selection and outputting of candidate regions for the cinemagraph is shown in Figure 3 by step 206.
In some embodiments the user interface input 100 can be used to select, from the candidate regions, a region to be output to the synchroniser 109.
The operation of the user selecting one option is shown in Figure 3 in step 207.
With respect to Figure 6 an example audio/haptic processor 111 is shown according to some embodiments. Furthermore with respect to Figure 7 an example operation of the audio/haptic processor 111 as shown in Figure 6 is shown in further detail. In some embodiments the audio/haptic processor 111 comprises a candidate determiner 301. The candidate determiner 301 can be configured to receive the video analysis input, in other words the motion parameters determined by the video analyser 105.
The operation of receiving the video analysis input is shown in Figure 7 by step 401.
In some embodiments the candidate determiner 301 can be configured to filter a search space of audio/tactile signals based on the motion parameters received. In some embodiments the candidate determiner comprises a database of available or candidate audio signals and/or tactile signals, wherein the audio/tactile signals have associated parameters (such as beat, duration, energy) which can be used as locating parameters on a search space used by the candidate determiner 301 to locate at least one candidate audio/tactile signal based on the motion parameters.
In some embodiments the filtering process can be performed by an entity other than the candidate determiner 301. For example in some embodiments the candidate determiner 301 can be configured to output the video analysis input motion parameters to the audio/haptic source 102, which then searches the database of audio/haptic signals and which then returns suitably matched audio/tactile signals or links to suitable audio/tactile signals to the candidate determiner.
Any suitable searching or filtering operation can be performed. For example in some embodiments an N-dimensional space is searched where each axis in the N-dimensional space represents a specific motion parameter (for example direction, speed, periodicity) and each potential candidate audio/tactile signal is located within the N-dimensional space.
The operation of filtering the search space based on the motion parameters for the audio signal or tactile signal is shown in Figure 7 by step 403.
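The following sketch illustrates one possible form of that search: each candidate clip is given a hypothetical parameter vector on the same axes as the motion parameters and the nearest candidates are returned. The clip names, parameter values and the choice of a Euclidean distance are invented for this illustration.

```python
import numpy as np

# Hypothetical candidate library: (period s, speed, direction degrees) per
# clip, on the same axes as the motion parameters from the video analyser.
CANDIDATES = {
    "train_loop.wav":  np.array([2.0, 40.0,   0.0]),
    "hammer_hit.wav":  np.array([0.5, 90.0, 270.0]),
    "water_waves.wav": np.array([4.0, 10.0,  90.0]),
}

def filter_candidates(motion_params, k=2):
    """Return the k clips whose parameter vectors lie closest to the motion
    parameters of the selected region (Euclidean distance on axes that are
    normalised by their spread across the library)."""
    query = np.asarray(motion_params, dtype=float)
    names = list(CANDIDATES)
    table = np.stack([CANDIDATES[name] for name in names])
    scale = table.std(axis=0) + 1e-9
    distances = np.linalg.norm((table - query) / scale, axis=1)
    order = np.argsort(distances)[:k]
    return [(names[i], float(distances[i])) for i in order]

print(filter_candidates([2.1, 35.0, 10.0]))   # train-like motion comes first
```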
As described herein in some embodiments a tactile signal to be generated by the apparatus based on the motion analysis can be selected. For example where the video shows in a region a person hitting the wall with a hammer, then the motion analysis shows a sudden jolt or stop when the hammer hits the wall. The sudden directional and speed change of the region can generate parameters which are used by the audio/haptic processor to select an audio/haptic signal with a sudden transient characteristic, and in some embodiments a short strong vibration effect is selected to be generated by the apparatus when the hammer hits the wall. Similarly for a video showing a region which approaches the camera and then goes past at a steady speed, the motion analysis can indicate constant motion parameters from which the audio/haptic processor can select an audio/haptic signal with a constant characteristic, for example a car noise or train noise. In some embodiments where the apparatus has sufficient audio capture or record capacity it would be understood that the audio/haptic processor can be configured to select at least a portion of the audio signals recorded at the time of the video recording.
In some embodiments the filtered audio signal/tactile signal candidates are presented to the user, for example via a user interface display.
The operation of presenting the candidate signals is shown in Figure 7 by step 405.
In such embodiments the candidate determiner 301 can be further configured to receive a user interface input and select a candidate or at least one candidate audio/tactile signal based on the user interface input.
The operation of selecting the candidate audio/tactile signal based on the user interface input is shown in Figure 7 by step 407.
It would be understood that the use of user input to assist in the selection of candidate audio/tactile signals is an optional operation and that as described herein the selection can be determined automatically (for example by selecting the nearest matching audio/tactile signal), semi-automatically (for example by displaying a number of near matching audio/tactile signals based on the motion parameters), or manually (for example by displaying the search space or a sub-set of the search space).
In some embodiments the audio/haptic processor comprises an audio/tactile signal parameter determiner 303. The audio/tactile parameter determiner 303 can in some embodiments be configured to receive the selected candidate audio/tactile signal or signals from the candidate determiner 301. Furthermore the audio/tactile parameter determiner 303 can be configured to further analyse the audio/tactile signal to generate at least one associated parameter (such as spatial parameters/music objects or any suitable parameter) to be further processed by the signal or spatial processor 305.
For example in some embodiments the audio/tactile signal parameter determiner 303 can be configured to generate a spatial parameterised version of the audio signal. In such embodiments the audio signal can be divided into time frames which are time to frequency domain converted and filtered into sub-band components. Each sub-band component can then be analysed in such a way that at least one directional component is identified, for example by correlating at least two channels of the audio signal, and the directional component is filtered from the background signal. In some embodiments the spatial parameterised version of the audio signal can be that provided by the Directional Audio Coding (DirAC) method wherein a mid (M) signal component with a direction representing the directional component and a side (S) component representing the background signal is generated. However any suitable parameterisation of the audio signal can in some embodiments be generated.
The operation of analysing the audio/tactile signal to generate spatial parameters/music objects is shown in Figure 7 by step 409.
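A heavily simplified, non-limiting stand-in for such an analysis is sketched below: it splits a stereo signal into frames and derives a mid component, a side component and a crude per-frame direction value from the inter-channel level difference. A real sub-band DirAC-style analysis would operate in the frequency domain; this broadband version is only an assumption made to keep the example short.

```python
import numpy as np

def parameterise_stereo(left, right, frame=1024):
    """Crude frame-by-frame spatial parameterisation of a stereo signal.

    Returns the mid (sum) and side (difference) components plus one
    direction value per frame in [-1, 1], where negative means the energy
    leans to the left channel and positive to the right.
    """
    n = (min(len(left), len(right)) // frame) * frame
    l = np.asarray(left[:n], dtype=float).reshape(-1, frame)
    r = np.asarray(right[:n], dtype=float).reshape(-1, frame)

    mid = 0.5 * (l + r)              # stands in for the directional part
    side = 0.5 * (l - r)             # stands in for the background part
    energy_l = np.sqrt((l ** 2).mean(axis=1)) + 1e-12
    energy_r = np.sqrt((r ** 2).mean(axis=1)) + 1e-12
    direction = (energy_r - energy_l) / (energy_r + energy_l)

    return mid, side, direction
```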
In some embodiments the audio/haptic processor 111 comprises a signal or spatial processor (or suitable means for signal processing or spatial processing the audio/tactile signal). The spatial processor 305 in some embodiments is configured to receive the audio/tactile signal in a suitable parameterised form, such as from the audio/tactile parameter determiner 303, or in some embodiments directly from the candidate determiner 301 where the audio/tactile signal is in a pre-parameterised form.
In some embodiments the spatial processor 305 can be configured to spatially process the audio/tactile signal based on the video analysis motion parameters.
For example in some embodiments where the motion analysis from the video analyser determines that the motion direction is one where the object moves from one side to the other (for example left to right), then the audio signal can be processed in such a way that it is heard moving from one side to the other. For example in some embodiments the spatial processor 305 can be configured to apply amplitude panning to the audio signal, which comprises a stereo or multichannel audio signal (for example a binaural or 6.1 channel audio signal), such that the audio signal has a greater volume or energy from the one side (left) initially which then moves such that by the end of the motion it is heard with a greater volume at the other (right). In some embodiments the spatial processing can be any suitable spatial processing such as applying a suitable inter-aural time delay or inter-aural level difference. Similarly in some embodiments where the motion direction is towards or away from the apparatus the spatial processor 305 can be configured to increase or decrease respectively the volume of the audio signal. In some embodiments the speed of the motion towards the apparatus or away from the apparatus can cause the spatial processor to apply a pitch shift to the audio signal to simulate the Doppler shift effect. The operation of performing spatial processing of the audio/tactile spatial parameters based on the motion parameters is shown in Figure 7 by step 411.
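As a non-limiting sketch of the amplitude panning described above, the fragment below renders a mono playback signal as stereo whose image follows the horizontal position of the region; the equal-power panning law, the sample rate and the example tone are assumptions made for this illustration.

```python
import numpy as np

def pan_to_follow_motion(mono, positions):
    """Render a mono playback signal as stereo following the region motion.

    positions: per-frame horizontal position of the region in [0, 1]
    (0 = left image edge, 1 = right image edge).  The positions are
    interpolated to one value per audio sample and equal-power panning is
    applied, so a left-to-right movement of the region is heard as a
    left-to-right movement of the sound.
    """
    mono = np.asarray(mono, dtype=float)
    pos = np.interp(np.linspace(0.0, 1.0, mono.size),
                    np.linspace(0.0, 1.0, len(positions)), positions)
    angle = pos * (np.pi / 2.0)
    return np.stack([np.cos(angle) * mono, np.sin(angle) * mono], axis=1)

# Example: a one second 440 Hz tone panned left to right as the region moves.
rate = 44100
t = np.arange(rate) / rate
stereo = pan_to_follow_motion(np.sin(2 * np.pi * 440 * t),
                              [0.0, 0.25, 0.5, 0.75, 1.0])
```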
In some embodiments the audio/tactile parameter determiner 303 can be configured to apply processing to a component or object of the audio signal. For example, where the candidate audio signal is a piece of music, the parameterised version of the audio signal can be considered to have separated instruments due to their differing dominant frequencies, and the spatial processor can be configured to select a specific instrument (or frequency subband) on the audio track to be processed to follow the movement pattern from the region. In some embodiments the spatial processor can be configured to apply an echo or other effect to a determined object to model the movement of the region. For example where the analysed video shows waves on water a tremolo effect can be produced on a selected subband of the audio signal to provide an equivalent audio experience.
The processed audio/tactile signal can in some embodiments be configured to be output.
The operation of outputting a processed audio signal is shown in Figure 7 by step 413.
It would be understood that in some embodiments that the spatial processor 305 can be configured to process the audio/tactile signal according to more than one of these embodiments. Furthermore in some embodiments the candidate determiner 301 and spatial processor 305 can be controlled by a user interface input configured to enable the switching on or off of the various audio enhancements as described herein.
It will also be understood that some embodiments relate to audio processing of the signal without spatial processing.
Although the audio enhancements/processing described herein are described with respect to moving images within cinemagraphs it would be understood that in some embodiments similar approaches can be applied to conventional video or moving images and to a single image. Thus for example while recording a single frame or picture a piece of video can be captured (not directly visible to the user). In some embodiments the audio is captured and stored as well, for example starting a couple of seconds before taking the picture and ending a couple of seconds after. The movement on the video can be analysed with a similar type of movement analysis engine as described herein and based on the analysis the user is provided different alternatives for audio content and processing of the audio content.
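A sketch of the kind of rolling pre-capture buffer implied above follows; the class name, the two-second windows and the frame-based interface are illustrative assumptions, not a description of any particular capture pipeline.

import collections

class PrePostCapture:
    """Keeps a rolling window of recent frames so that a still capture can be
    accompanied by a short clip starting a couple of seconds before the shutter."""

    def __init__(self, frame_rate=30, pre_seconds=2, post_seconds=2):
        self.pre_frames = collections.deque(maxlen=frame_rate * pre_seconds)
        self.post_needed = frame_rate * post_seconds
        self.post_frames = []
        self.capturing_post = False

    def on_frame(self, frame):
        # Called for every viewfinder frame; audio could be buffered in the same way.
        if self.capturing_post:
            if len(self.post_frames) < self.post_needed:
                self.post_frames.append(frame)
        else:
            self.pre_frames.append(frame)

    def on_shutter(self):
        # Freeze the pre-buffer and start collecting the post-capture frames.
        self.capturing_post = True

    def clip(self):
        return list(self.pre_frames) + self.post_frames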
In some embodiments the audio/tactile processor 111 can then output the selected/processed audio/tactile signals to the synchroniser 109.
In some embodiments the apparatus comprises a synchroniser 109 configured to receive video information from the video processor 103, audio/tactile information from the audio/tactile processor 111, and user interface information from the user interface input 100. In some embodiments the synchroniser 109 can be configured to adjust the audio/tactile and/or video frame parameters and further perform synchronisation and enhancement to the audio and video signals prior to outputting the file information.
For example in some embodiments the synchroniser 109 can display on the user interface an expanded selected audio region to permit the user interface input to select frames for synchronising the image to the audio/tactile signal.
The operation of selecting an image frame for synchronising with the audio/tactile signal is shown in Figure 3 by step 209.
In some embodiments the video or audio data can furthermore be manipulated such that the audio/tactile and/or video images are warped in time to produce a better finished product.
The operation of adjusting the audio and frame parameters, synchronisation and enhancement operations are shown in Figure 3 by step 210.
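One simple way such a time adjustment could be realised is sketched below; the linear resampling, the sample-accurate event position and the frame/sample rates are assumptions for illustration only.

import numpy as np

def warp_audio_to_frame(audio, sample_rate, event_sample, frame_index, frame_rate):
    """Linearly resample the audio so that the sample at event_sample lands
    on the timestamp of the user-selected image frame."""
    target_sample = frame_index / frame_rate * sample_rate
    if event_sample <= 0 or target_sample <= 0:
        return audio
    stretch = target_sample / event_sample
    new_length = int(round(len(audio) * stretch))
    read_positions = np.linspace(0.0, len(audio) - 1, new_length)
    return np.interp(read_positions, np.arange(len(audio)), audio)

# Example: align an audio onset at 0.5 s (sample 22050) with frame 30 of a 25 fps loop.
fs = 44100
audio = np.random.randn(2 * fs)
aligned = warp_audio_to_frame(audio, fs, event_sample=22050, frame_index=30, frame_rate=25.0)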
The synchroniser 109 can in some embodiments be configured to save the completed cinemagraph or animated image with audio file according to any suitable format.
In some embodiments the synchroniser 109 can then be configured to mix or multiplex the data to form a cinemagraph or animated image metadata file comprising both image or video data and audio/tactile signal data. In some embodiments this mixing or multiplexing of data can generate a file comprising at least some of: video data, audio data, tactile signal data, sub region identification data and time synchronisation data according to any suitable format. The mixer and synchroniser 109 can in some embodiments output the metadata or file output data.
The operation of saving the file containing the audio and video information is shown in Figure 3 by step 212.
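For illustration only, the sketch below shows one hypothetical way such a multiplexed output could be represented as a sidecar metadata file referencing the media data; the field names and the JSON layout are assumptions and not a format defined by the embodiments.

import json

def save_cinemagraph(path, video_file, audio_file, tactile_file, region, sync_offset_ms):
    """Write a sidecar metadata file tying together the image/video data, the
    processed playback signals, the animated sub-region and the sync information."""
    metadata = {
        "video": video_file,
        "audio": audio_file,
        "tactile": tactile_file,
        "region": {"x": region[0], "y": region[1], "width": region[2], "height": region[3]},
        "sync_offset_ms": sync_offset_ms,
    }
    with open(path, "w") as f:
        json.dump(metadata, f, indent=2)

save_cinemagraph("clip.cinemagraph.json", "clip.mp4", "clip_audio.wav", "clip_tactile.bin",
                 region=(120, 80, 320, 240), sync_offset_ms=40)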
It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers. Furthermore, it will be understood that the term acoustic sound channels is intended to cover sound outlets, channels and cavities, and that such sound channels may be formed integrally with the transducer, or as part of the mechanical integration of the transducer with the device.
In general, the design of various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The design of embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware.
Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as for example DVD and the data variants thereof, and CD. The memory used in the design of embodiments of the application may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
Embodiments of the inventions may be designed by various components such as integrated circuit modules. As used in this application, the term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and (b) to combinations of circuits and software (and/or firmware), such as: (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and (c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of 'circuitry' applies to all uses of this term in this application, including any claims. As a further example, as used in this application, the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term 'circuitry' would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or similar integrated circuit in a server, a cellular network device, or other network device.
The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims.
However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.

Claims (8)

1. A method comprising: analysing at least two images to determine at least one region common to the at least two images; determining at least one parameter associated with a motion of at least one region; determining at least one playback signal to be associated with the at least one region; and processing the at least one playback signal based on the at least one parameter.
2. The method as claimed in claim 1, wherein determining at least one parameter associated with a motion of at least one region comprises: determining a motion of the at least one region; and determining at least one parameter based on the motion of the at least one region.
3. The method as claimed in any of claims 1 and 2, wherein the at least one parameter comprises at least one of: a motion periodicity; a motion direction; a motion speed; and a motion type.
4. The method as claimed in any of claims 1 to 3, wherein determining at least one playback signal to be associated with the at least one region comprises determining at least one playback signal based on the at least one parameter.
5. The method as claimed in claim 4, wherein determining at least one playback signal based on the at least one parameter comprises: determining at least two playback signals based on the at least one parameter; receiving an input to select one of the at least two playback signals; and selecting one of the at least two playback signals based on the input.
6. The method as claimed in any of claims 4 and 5, wherein determining at least one playback signal based on the at least one parameter comprises: determining for at least one playback signal at least one motion parameter value; and determining the at least one motion parameter value is within a determined distance of the at least one parameter.
7. The method as claimed in any of claims 1 to 6, wherein processing the at least one playback signal based on the at least one parameter comprises at least one of: spatial processing the at least one playback signal based on the at least one parameter; combining the at least one playback signal to a recorded at least one audio signal based on the at least one parameter; and signal processing the at least one playback signal based on the at least one parameter.
8. The method as claimed in claim 7, wherein spatial processing the at least one playback signal based on the at least one parameter comprises modifying the audio field of the at least one playback signal to move based on the motion of the at least one region.
9. The method as claimed in any of claims 1 to 8, further comprising: displaying at least one image of the at least two images; and synchronising and outputting the processed at least one playback signal.
10. The method as claimed in claims 1 to 9, wherein the at least one playback signal comprises at least one of: at least one audio signal; and at least one tactile signal.
11. The method as claimed in claims 1 to 10, wherein processing the at least one playback signal based on the at least one parameter comprises at least one of: determining within the playback signal at least one audio object; and spatially processing the at least one audio object based on the at least one parameter such that the at least one audio object follows the motion of the at least one region.
12. An apparatus comprising: means for analysing at least two images to determine at least one region common to the at least two images; means for determining at least one parameter associated with a motion of at least one region; means for determining at least one playback signal to be associated with the at least one region; and means for processing the at least one playback signal based on the at least one parameter.
13. The apparatus as claimed in claim 12, wherein the means for determining at least one parameter associated with a motion of at least one region comprises: means for determining a motion of at least one region; and means for determining at least one parameter based on the motion of the at least one region.
14. The apparatus as claimed in claim 12 or claim 13, wherein the at least one parameter comprises at least one of: a motion periodicity; a motion direction; a motion speed; and a motion type.
15. The apparatus as claimed in any of claims 12 to 14, wherein the means for determining at least one playback signal to be associated with the at least one region comprises means for determining at least one playback signal based on the at least one parameter.
16. The apparatus as claimed in claim 15, wherein the means for determining at least one playback signal based on the at least one parameter comprises: means for determining at least two playback signals based on the at least one parameter; means for receiving an input to select one of the at least two playback signals; and means for selecting one of the at least two playback signals based on the input.
17. The apparatus as claimed in claim 15 or claim 16, wherein the means for determining at least one playback signal based on the at least one parameter comprises: means for determining for at least one playback signal at least one motion parameter value; and means for determining the at least one motion parameter value is within a determined distance of the at least one parameter.
18. The apparatus as claimed in any of claims 12 to 17, wherein the means for processing the at least one playback signal based on the at least one parameter comprises at least one of: means for spatial processing the at least one playback signal based on the at least one parameter; means for combining the at least one playback signal to a recorded at least one audio signal based on the at least one parameter; and means for signal processing the at least one playback signal based on the at least one parameter.
19. The apparatus as claimed in claim 18, wherein the means for spatial processing the at least one playback signal based on the at least one parameter comprises means for modifying the audio field of the at least one playback signal to move based on the motion of the at least one region.
20. The apparatus as claimed in any of claims 12 to 19, comprising: means for displaying at least one image of the at least two images; and means for synchronising and outputting the processed at least one signal.
21. The apparatus as claimed in any of claims 12 to 20, wherein the at least one playback signal comprises at least one of: at least one audio signal; and at least one tactile signal.
22. The apparatus as claimed in any of claims 12 to 21, wherein the means for processing the at least one playback signal based on the at least one parameter comprises at least one of: means for determining within the playback signal at least one audio object; and means for spatially processing the at least one audio object based on the at least one parameter such that the at least one audio object follows the motion of the at least one region.
23. An apparatus comprising at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured to, with the at least one processor, cause the apparatus at least to: analyse at least two images to determine at least one region common to the at least two images; determine at least one parameter associated with a motion of at least one region; determine at least one playback signal to be associated with the at least one region; and process the at least one playback signal based on the at least one parameter.
24. An apparatus comprising: an analyser configured to analyse at least two images to determine at least one region common to the at least two images; a motion determiner configured to determine at least one parameter associated with a motion of at least one region; a playback determiner configured to determine at least one playback signal to be associated with the at least one region; and a processor configured to process the at least one playback signal based on the at least one parameter.
GB1315502.3A 2013-08-30 2013-08-30 An image enhancement apparatus and method Withdrawn GB2518144A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1315502.3A GB2518144A (en) 2013-08-30 2013-08-30 An image enhancement apparatus and method
PCT/FI2014/050650 WO2015028713A1 (en) 2013-08-30 2014-08-27 An image enhancement apparatus and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1315502.3A GB2518144A (en) 2013-08-30 2013-08-30 An image enhancement apparatus and method

Publications (2)

Publication Number Publication Date
GB201315502D0 GB201315502D0 (en) 2013-10-16
GB2518144A true GB2518144A (en) 2015-03-18

Family

ID=49397087

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1315502.3A Withdrawn GB2518144A (en) 2013-08-30 2013-08-30 An image enhancement apparatus and method

Country Status (2)

Country Link
GB (1) GB2518144A (en)
WO (1) WO2015028713A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012001587A1 (en) * 2010-06-28 2012-01-05 Koninklijke Philips Electronics N.V. Enhancing content viewing experience
WO2013116937A1 (en) * 2012-02-09 2013-08-15 Flixel Photos Inc. Systems and methods for creation and sharing of selectively animated digital photos
WO2013175051A1 (en) * 2012-05-25 2013-11-28 Nokia Corporation Method and apparatus for producing a cinemagraph

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5802220A (en) * 1995-12-15 1998-09-01 Xerox Corporation Apparatus and method for tracking facial motion through a sequence of images
US6636220B1 (en) * 2000-01-05 2003-10-21 Microsoft Corporation Video-based rendering
EP2711929A1 (en) * 2012-09-19 2014-03-26 Nokia Corporation An Image Enhancement apparatus and method

Also Published As

Publication number Publication date
WO2015028713A1 (en) 2015-03-05
GB201315502D0 (en) 2013-10-16

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: NOKIA TECHNOLOGIES OY

Free format text: FORMER OWNER: NOKIA CORPORATION

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)