WO2015131950A1 - Creating an animation of an image - Google Patents

Creating an animation of an image Download PDF

Info

Publication number
WO2015131950A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
animation
processor
display
area
Prior art date
Application number
PCT/EP2014/054403
Other languages
French (fr)
Inventor
Robert SEVERN
Matthew Sullivan
Original Assignee
Longsand Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longsand Limited filed Critical Longsand Limited
Priority to PCT/EP2014/054403 priority Critical patent/WO2015131950A1/en
Publication of WO2015131950A1 publication Critical patent/WO2015131950A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a method and a system for creating an animation of an image. The method may include obtaining an image using a device, wherein the device displays the image on a display. An area of the image is selected. The selected area is moved on the display to create an animation. The animation is saved to the device.

Description

CREATING AN ANIMATION OF AN IMAGE
BACKGROUND
[0001] Augmented reality (AR) is the integration of digital information with the real-world environment. In particular, AR provides a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data. AR may include the recognition of an image, an object, a face, or any element within the real-world environment and the tracking of that image by utilizing real-time localization in space. AR may also include superimposing digital media, e.g., video, three-dimensional (3D) images, graphics, text, etc., on top of a view of the real-world environment so as to merge the digital media with the real-world environment.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Certain examples are described in the following detailed description and in reference to the drawings, in which:
[0003] Fig. 1 is a block diagram of a computing device for the creation of an animation of an image;
[0004] Figs. 2A-2E are drawings of sequentially created frames that demonstrate the creation of an animation of an image viewed on a display screen of a device;
[0005] Fig. 3 is a process flow diagram for creating an animation of an image; and
[0006] Fig. 4 is a block diagram showing a non-transitory, computer-readable media that holds code that enables the creation of an animation of an image.
DETAILED DESCRIPTION OF SPECIFIC EXAMPLES
[0007] Images may be augmented in real-time and in semantic context with environmental elements to enhance a viewer's understanding or informational context. For example, a broadcast image of a sporting event may include superimposed visual elements, such as lines that appear to be on the field, or arrows that indicate the movement of an athlete. Thus, augmented reality (AR) allows enhanced information about the real world of a user to be overlaid onto a view of the real world. Further, AR may include the use of animated environments or videos. Animated may be defined to include motion of portions of an image, as distinguished from something that is merely static.
[0008] Examples described herein enable the creation of an animation of an image. An image may be captured using a camera in a device, or may be obtained from a storage device. The device may include a display on which the captured image can be displayed. On the display, an area of the captured image may be selected. The selected area of the image may be modified by a user, for example, by being moved to a different part of the image, to create an animation. The animation may be saved to the device to be retrieved and activated for later use. The animation may be activated when the image is placed within view of the camera of the device. The animation may then be overlaid over the static image, providing the illusion of motion within the image.
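The capture/select/move/save workflow of paragraph [0008] can be pictured as a short script. The following is a minimal sketch, assuming OpenCV and NumPy; the file name drawing.png, the horizontal motion, and the frame count are illustrative choices, not details from the patent.

```python
# Minimal sketch of the capture/select/move/save workflow described
# above, using OpenCV. All function and file names are illustrative.
import cv2
import numpy as np

def make_animation_frames(image, roi, dx_total=200, n_frames=24):
    """Create frames in which the selected area slides horizontally."""
    x, y, w, h = roi
    patch = image[y:y+h, x:x+w].copy()
    # Crude background fill for the vacated region: the median color of
    # the patch border (a real tool would inpaint the hole instead).
    border = np.concatenate([patch[0], patch[-1], patch[:, 0], patch[:, -1]])
    fill = np.median(border, axis=0).astype(image.dtype)
    frames = []
    for i in range(n_frames):
        frame = image.copy()
        frame[y:y+h, x:x+w] = fill
        nx = x + int(dx_total * i / (n_frames - 1))
        nx = min(nx, image.shape[1] - w)       # keep the patch on-screen
        frame[y:y+h, nx:nx+w] = patch
        frames.append(frame)
    return frames

if __name__ == "__main__":
    img = cv2.imread("drawing.png")            # stands in for the captured image
    roi = cv2.selectROI("select area", img)    # user drags a box, e.g. around a ball
    for i, f in enumerate(make_animation_frames(img, roi)):
        cv2.imwrite(f"frame_{i:03d}.png", f)   # saved for later playback
```

A production tool would inpaint the vacated region rather than flood it with a border color, but the capture-select-move-save shape of the method is the same.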
[0009] Fig. 1 is a block diagram of a computing device 100 for the creation of an animation of an image. The computing device 100 may be, for example, a smartphone, a computing tablet, a laptop computer, or a desktop computer, among others. The computing device 100 may include a processor 102 that is configured to execute stored instructions, as well as a memory device 104 that stores instructions that are executable by the processor 102. The processor 102 can be a single core processor, a dual-core processor, a multi-core processor, a computing cluster, or the like. The processor 102 may be coupled to the memory device 104 by a bus 106, where the bus 106 may be a communication system that transfers data between various components of the computing device 100. In examples, the bus 106 may be PCI, ISA, PCI-Express, HyperTransport®, NuBus, a proprietary bus, or the like.
[0010] The memory device 104 can include random access memory (RAM), e.g., SRAM, DRAM, zero capacitor RAM, eDRAM, EDO RAM, DDR RAM, RRAM, PRAM, read only memory (ROM), e.g., Mask ROM, PROM, EPROM, EEPROM, flash memory, or any other suitable memory systems. The computing device 100 may also include a graphics processing unit (GPU) 108. As shown, the processor 102 may be coupled through the bus 106 to the GPU 108. The GPU 108 may be configured to perform any number of graphics operations within the computing device 100. For example, the GPU 108 may be configured to render or manipulate graphic images, graphic frames, videos, or the like, that may be displayed to a user of the computing device 100. The computing device 100 may also include a storage device 110. The storage device 110 may include physical memory such as a hard drive, an optical drive, a flash drive, an array of drives, or any combinations thereof. In some computing devices 100, a single unit can function as both the memory device 104 and the storage device 110.
[0011] The processor 102 may be connected through the bus 106 to an input/output (I/O) device interface 114 configured to connect the computing device 100 to one or more I/O devices 116. The I/O devices 116 may include, for example, a keyboard, a mouse, and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others. The I/O devices 116 may be built-in components of the computing device 100, or located externally to the computing device 100.
[0012] The processor 102 may also be linked through the bus 106 to a camera 118 to capture an image, where the captured image may be stored to the storage device 110. The processor 102 may also be linked through the bus 106 to a display interface 120 configured to connect the computing device 100 to a display device 122. The display device 122 may be a built-in component or externally connected to the computing device 100. The display device 122 may include a display screen of a smartphone, a computing tablet, a computer monitor, a television, or a projector, among others. The captured image may be viewed on the display device 122. In some examples, the display device 122 may be associated with a touch screen to form a touch-sensitive display. The touch screen may allow a user to interact with an object shown on the display device 122 by touching the display device 122 with a pointing device, a finger, or a combination of both.
[0013] A wireless local area network (WLAN) 124 and a network interface controller (NIC) 126 may also be linked to the processor 102. The WLAN 124 may link the computing device 100 to a network 128 through a radio signal 130. Similarly, the NIC 126 may link the computing device 100 to the network 128 through a physical connection, such as a cable 132. Either network connection 124 or 126 allows the computing device 100 to access resources attached to the network 128, such as the Internet, printers, fax machines, email, instant messaging applications, and files located on storage servers, among others. The computing device 100 may also link to the network 128 through a wireless wide area network (WWAN) 134, which uses a mobile data signal 136 to communicate with mobile phone towers.
[0014] The storage device 110 may include a number of software modules configured to provide the computing device 100 with AR functionality. For example, an image recognition module 134 may be utilized to identify an image. This may be used, for example, to trigger an animation sequence created for the image. As described herein, animation is the creation of an illusion of continuous motion using a rapid display of a sequence of static images that minimally differ from each other. Thus, a sequence of images in which a selected area changes may be displayed in rapid sequence to create the illusion that the selected area is moving.
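As a concrete illustration of the rapid display described in the preceding paragraph, here is a minimal playback sketch that shows saved frames in quick succession; the 24 fps rate and the frame_*.png naming are assumptions carried over from the earlier sketch.

```python
# A sketch of the playback side of paragraph [0014]: displaying a
# sequence of minimally differing frames fast enough to suggest motion.
import glob
import cv2

frames = [cv2.imread(p) for p in sorted(glob.glob("frame_*.png"))]
delay_ms = int(1000 / 24)                # ~24 frames per second
while True:
    for frame in frames:
        cv2.imshow("animation", frame)
        if cv2.waitKey(delay_ms) == 27:  # Esc stops the loop
            cv2.destroyAllWindows()
            raise SystemExit
```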
[0015] The image recognition module 134 may be hosted on a separate unit from the computing device 100. For example, the image recognition module 134 may be hosted on a cloud server, allowing image recognition to take place over a network connection, e.g., via the NIC 126, the WLAN 124, or the WWAN 134; the cloud server then provides the animation information to the local computing device 100.
[0016] An animation module 136 allows the user to select an area of the captured image, for example, via a touch screen, and apply movements to the selected area. The techniques to select an area may include edge detection, finger-tracking, and the like. The movements may include dragging, rotating, shearing, shrinking, or any other types of movements to manipulate the selected area. In one example, a series of sequential images can be automatically created while a user manipulates the selected area. The processor 102 may save the animation to the storage device 110 of the computing device 100 for later usage.
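One plausible realization of the selection and movement techniques named above, sketched with OpenCV: Canny edges snap a tapped point to a closed contour, and affine warps provide the rotate/shear/shrink manipulations. The Canny thresholds and the shear formulation are assumptions, since the patent names the techniques without prescribing parameters.

```python
# Edge-based selection plus affine manipulation, per paragraph [0016].
import cv2
import numpy as np

def select_area_at(image, tap_xy):
    """Return a mask for an edge-bounded region containing the tap point."""
    edges = cv2.Canny(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(image.shape[:2], np.uint8)
    for c in contours:
        if cv2.pointPolygonTest(c, tap_xy, False) >= 0:  # tap lies inside
            cv2.drawContours(mask, [c], -1, 255, cv2.FILLED)
            break
    return mask

def manipulate(patch, angle_deg=0.0, shear=0.0, scale=1.0):
    """Apply the rotate/shear/shrink movements to a selected patch."""
    h, w = patch.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)
    M[0, 1] += shear                      # add a crude horizontal shear term
    return cv2.warpAffine(patch, M, (w, h))
```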
[0017] An augmented reality module 138 may instruct the processor 102 to scan for a trigger image, using the camera 118 and displaying the environment on the display device 122. When the trigger image is recognized, content, e.g., the animation, may be superimposed over the image on the display device 122.
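The trigger-image scan could be implemented with off-the-shelf feature matching. Below is a hedged sketch using ORB features and a RANSAC homography; the trigger.png file, the 1,000-feature budget, and the 25-match threshold are illustrative assumptions, not values from the patent.

```python
# A sketch of the trigger-image scan in [0017]: ORB features from the
# stored trigger image are matched against each camera frame, and a
# homography is attempted once enough matches agree.
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

trigger = cv2.imread("trigger.png", cv2.IMREAD_GRAYSCALE)
kp_t, des_t = orb.detectAndCompute(trigger, None)

def find_trigger(frame_gray, min_matches=25):
    """Return the 3x3 homography mapping trigger -> frame, or None."""
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_f is None:
        return None
    matches = bf.match(des_t, des_f)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

cap = cv2.VideoCapture(0)                    # scan the environment
while True:
    ok, frame = cap.read()
    if not ok:
        break
    H = find_trigger(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if H is not None:
        print("trigger image in view")       # overlay would happen here
    cv2.imshow("scan", frame)
    if cv2.waitKey(1) == 27:
        break
```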
[0018] Figs. 2A-2E are drawings of sequentially created frames that demonstrate the creation of an animation 202 of an image 204 viewed on a display screen 206 of a computing device 208. The computing device 208 may be as described with respect to Fig. 1. The image 204 used for the creation of the animation 202 will be static. As used herein, a static image is a visual image that does not move, e.g., a photograph, a poster, a newspaper, a painting, or other still images. In the example shown in Figs. 2A-2E, the image 204 includes a hand-drawn image of a child throwing a ball 210. The image 204 may be captured, for example, using a camera associated with the computing device 208, or may be imported from an external source, such as a picture stored on the computing device 208 or in an external network.
[0019] A touch screen may be used to select an area of the image 204. The computing device 208 may include techniques, such as edge detection, finger-tracking, or any other type of gesture recognition technologies, to allow the user to identify the selected area. In the example of Fig. 2C, the ball 210 has been chosen as the selected area and is shown filled in.
[0020] The user may move the selected area, e.g., the ball 210, on the display screen 206, as depicted in Fig. 2D. The user may have the option of using dragging, rotating, shearing, shrinking, or any other types of movements to manipulate the ball 210. In the example shown in Fig. 2D, the ball 210 is dragged across the display screen 206, while a rotating motion is used to turn the ball 210, simulating rotation in the animation 202. A number of frames may be automatically or manually captured during the motion to create the individual frames of the animation 202. Thus, by moving the selected area, e.g., the ball 210, an animation 202 is created from the trigger image 204. The animation 202 may be saved to the device for future retrieval and usage.
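As a sketch of this interaction, the following uses a mouse drag in place of the touch screen and records one animation frame per movement event; it covers the dragging motion only, and the window name and patch-centering logic are illustrative assumptions.

```python
# Recording frames while the user drags the selected patch ([0020]).
import cv2

img = cv2.imread("drawing.png")
x, y, w, h = cv2.selectROI("animate", img)   # e.g. the ball 210
patch = img[y:y+h, x:x+w].copy()
frames = []

def on_mouse(event, mx, my, flags, _):
    if event == cv2.EVENT_MOUSEMOVE and flags & cv2.EVENT_FLAG_LBUTTON:
        frame = img.copy()                   # redraw the base image
        fy, fx = max(0, my - h // 2), max(0, mx - w // 2)
        frame[fy:fy+h, fx:fx+w] = patch[:img.shape[0]-fy, :img.shape[1]-fx]
        frames.append(frame)                 # one frame per drag step
        cv2.imshow("animate", frame)

cv2.setMouseCallback("animate", on_mouse)
cv2.imshow("animate", img)
cv2.waitKey(0)
print(f"captured {len(frames)} frames")
```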
[0021] It can be noted that the animation is not limited to the selected area of the image, but may include graphic objects that are imported from other sources, such as a server. For example, logos and other materials relevant to an organization may be imported into the image, selected, and manipulated to create the animation. This may allow a small organization to generate high quality animations without hiring a commercial artist.
[0022] The augmented reality module 138, as discussed with respect to Fig. 1, may be used to superimpose the animation 202 onto the trigger image 204. The augmented reality module 138 may be, for example, an application that is downloaded to the storage device 110. For example, an augmented reality platform may use camera technology to scan a real-world environment, including images and objects within the environment, and to overlay information onto the real-world environment as shown on the display screen 206. As an example, the user may access the augmented reality module 138 from the device 208 and then point the device 208 at the trigger image 204. When the image recognition software associated with the augmented reality module determines that a trigger image, such as image 204, is in view of the camera, it retrieves the associated animation 202 from the memory 104 of the device 208, overlays the animation over the image 204, and activates the animation. As a result, the image 204 can appear to have motion when viewed on the display screen 206 of the device 208.
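Given a homography H from the recognition step (see the ORB sketch above), the overlay itself can be a single warp-and-composite, as in this assumed implementation:

```python
# A sketch of the overlay step in [0022]: each animation frame is warped
# by H onto the live camera frame so it appears to sit on the trigger image.
import cv2
import numpy as np

def overlay_animation(camera_frame, anim_frame, H):
    """Warp anim_frame by H and composite it over camera_frame."""
    h, w = camera_frame.shape[:2]
    warped = cv2.warpPerspective(anim_frame, H, (w, h))
    mask = cv2.warpPerspective(
        np.full(anim_frame.shape[:2], 255, np.uint8), H, (w, h))
    out = camera_frame.copy()
    out[mask > 0] = warped[mask > 0]         # replace pixels under the overlay
    return out
```

Calling overlay_animation(frame, frames[i % len(frames)], H) once per camera frame would cycle the animation in place over the trigger image.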
[0023] Fig. 3 is a process flow diagram of a method 300 for creating an animation of an image. At block 302, an image may be captured using a device. In particular, the device may include a camera as an image capturing device. The device may also include a display screen on which the captured image can be displayed to a user. The method 300 is not limited to capturing the image, as an image may be obtained for the animation using any number of other techniques. For example, the image may be imported from another program on the device or imported from another device, such as a server.
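Block 302's two acquisition paths, camera capture and file import, could be combined behind one helper, as in this minimal sketch; the camera index and the path argument are assumptions.

```python
# Obtaining an image per block 302: from a file, or from the camera.
import cv2

def obtain_image(path=None, camera_index=0):
    """Return an image either imported from a file or captured live."""
    if path is not None:
        return cv2.imread(path)            # imported image
    cap = cv2.VideoCapture(camera_index)   # built-in camera
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None
```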
[0024] At block 304, the user may select an area of the captured image on the display screen. At block 306, the selected area of the captured image may be moved on the display screen to create an animation. An animation may be defined as the sequential presentation of a number of images, each with a slightly different location for the selected area, which creates the illusion of continuous motion. The movements of the selected area may include dragging, rotating, shearing, shrinking, or any other types of movements. As the selected area is moved, a sequence of images can be captured, automatically or manually, to create the animation. At block 308, the animation may be saved to the device. In particular, the animation may be saved to a storage device for later retrieval and activation.
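Block 308 maps naturally onto a container write. A minimal sketch, assuming OpenCV's VideoWriter; the mp4v codec, the path, and the frame rate are illustrative choices.

```python
# Saving the captured frame sequence per block 308.
import cv2

def save_animation(frames, path="animation.mp4", fps=24):
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(
        path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames:
        writer.write(frame)
    writer.release()
```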
[0025] The process flow diagram in Fig. 3 is not intended to indicate that the method 300 is to include all of the components shown in Fig. 3. Further, the method 300 may include fewer or more blocks than what is shown, depending on the details of the specific implementation. For example, the animation can be triggered by pointing the camera of a mobile device at the trigger image. Image recognition software in the mobile device can identify the trigger image, and activate the animation. The animation can be overlaid over the trigger image, giving the illusion that the trigger image has "come to life."
[0026] The animation is not limited to an animation of the trigger image. For example, an animation may be created from a first image, and then associated with a second image as the trigger image. When the second image is recognized in the view of the camera, the animation is then triggered. Similarly, the animation may not be present on the device when the animation is triggered. For example, a device may be pointed at an augmented commercial advertisement, such as at a store, and the augmented reality software will recognize the image and download an animation from a server.
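For the server-hosted case in paragraph [0026], retrieval might look like the following sketch; the endpoint URL, the .mp4 packaging, and the trigger_id naming are hypothetical, as the patent does not specify a protocol.

```python
# Fetching an animation from a server once a trigger is recognized ([0026]).
import os
import urllib.request

def fetch_animation(trigger_id, base_url="https://example.com/animations"):
    """Download the animation for a recognized trigger, caching it locally."""
    local = f"{trigger_id}.mp4"
    if not os.path.exists(local):            # only fetch once
        urllib.request.urlretrieve(f"{base_url}/{trigger_id}.mp4", local)
    return local
```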
[0027] Fig. 4 is a block diagram showing a non-transitory, computer-readable media 400 that holds code that enables the creation of an animation of an image. The computer-readable media 400 may be accessed by a processor 402 over a system bus 404. The code may direct the processor 402 to perform the steps of the current method as described with respect to Fig. 3.
[0028] Additionally, the various components of the computing device 100 discussed with respect to Fig. 1 may be stored on the non-transitory, computer-readable media 400, as shown in Fig. 4. For example, a capture module 406 may be configured to capture an image using a camera built into a device. The image may be a static image such as a drawing, a photograph, or a newspaper clipping, among others. A select module 408 may be configured to select an area of the captured image. In particular, the select module 408 may allow a user to select a single area or a plurality of areas of the captured image based upon the user's preferences. For example, the image may depict a child holding a ball. The user may desire that the ball move within the image; thus, the user may select the ball as the desired area to be subjected to movement.
[0029] A move module 410 may be configured to move the selected area to create an animation. In the example of the child holding the ball, the selected area, i.e., the ball, may be moved from one point to another point, with the device recording a sequence of images during the movement. Thus, the animation of the ball may include a series of images with the ball in a slightly different location in each image. The images may be presented in a timed sequence so as to make the ball appear to move. A save module 412 may be configured with instructions to save the animation to a memory of the device for subsequent retrieval and activation. For example, an image recognition module 414 may identify the trigger image, and play the associated animation on a screen of the device, for example, superimposed over the trigger image.
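The "slightly different location in each image" requirement amounts to interpolating the selected area's position between frames. A small sketch follows, where the smoothstep easing is an added assumption; the patent only requires incrementally differing positions.

```python
# Interpolated per-frame positions for the move module sketch ([0029]).
import numpy as np

def eased_positions(p0, p1, n_frames=24):
    """Return n_frames (x, y) points from p0 to p1 with smoothstep easing."""
    t = np.linspace(0.0, 1.0, n_frames)
    s = t * t * (3 - 2 * t)                  # smoothstep: 0 -> 1, zero slope at ends
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    return [tuple(p0 + (p1 - p0) * si) for si in s]

# e.g. eased_positions((40, 120), (240, 60)) gives per-frame ball centers
```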
[0030] While the present techniques may be susceptible to various modifications and alternative forms, the exemplary examples discussed above have been shown only by way of example. It is to be understood that the technique is not intended to be limited to the particular examples disclosed herein. Indeed, the present techniques include all alternatives, modifications, and equivalents falling within the true spirit and scope of the appended claims.

Claims

CLAIMS
What is claimed is:
1. A method for creating an animation of an image, comprising:
obtaining an image using a device, wherein the device displays the image on a display;
selecting an area of the image;
moving the selected area on the display to create an animation; and
saving the animation to the device.
2. The method of claim 1, comprising:
pointing the device at a copy of the image;
recognizing the image; and
overlaying the animation over the copy of the image shown on the display.
3. The method of claim 1, comprising:
importing a separate graphic into the image; and
selecting the separate graphic as the area of the image.
4. The method of claim 1, comprising capturing the image using a camera.
5. The method of claim 1, wherein the selected area of the image is moved using dragging, rotating, shearing, shrinking, or any combinations thereof.
6. The method of claim 1, wherein a number of frames are captured during the movement of the selected area to create individual frames of the animation.
7. A device for creating an animation of an image, comprising:
a processor configured to execute instructions; and
a storage device that stores instructions, the storage device comprising code to direct the processor to:
select an area of a displayed image;
move the selected area to create an animation of the displayed image; and
save the animation.
8. The device of claim 7, comprising:
a camera configured to scan the environment;
a display to show the environment as it is scanned; and
code configured to direct the processor to:
recognize the image when it is in view of the camera; and
overlay the animation onto the image on the display.
9. The device of claim 7, wherein the code configured to direct the processor to select the area of the image includes edge detection, finger-tracking, or any other type of gesture recognition technique.
10. The device of claim 7, comprising code configured to direct the processor to move the selected area by dragging, rotating, shearing, shrinking, or any combinations thereof.
11. The device of claim 7, comprising code configured to direct the processor to capture a sequence of frames as the selected area is moved to create the animation.
12. The device of claim 7, wherein the image is a drawing, a photograph, or a static image.
13. The device of claim 7, comprising a graphics processing unit.
14. The device of claim 7, comprising a touch screen over the display screen.
15. A non-transitory, machine-readable medium comprising instructions that when executed by a processor create an animation of an image by:
capturing an image using a device;
selecting an area of the captured image;
moving the selected area to create an animation; and
saving the animation to a memory of the device.
PCT/EP2014/054403 2014-03-06 2014-03-06 Creating an animation of an image WO2015131950A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/054403 WO2015131950A1 (en) 2014-03-06 2014-03-06 Creating an animation of an image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/054403 WO2015131950A1 (en) 2014-03-06 2014-03-06 Creating an animation of an image

Publications (1)

Publication Number Publication Date
WO2015131950A1 (en) 2015-09-11

Family

ID=50288036

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/054403 WO2015131950A1 (en) 2014-03-06 2014-03-06 Creating an animation of an image

Country Status (1)

Country Link
WO (1) WO2015131950A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US858534A (en) * 1905-11-27 1907-07-02 Amos Sawyer Petrie Railway-tie.
US20090119597A1 (en) * 2007-08-06 2009-05-07 Apple Inc. Action representation during slide generation
US20130219344A1 (en) * 2012-02-17 2013-08-22 Autodesk, Inc. Editable motion trajectories

Similar Documents

Publication Publication Date Title
US10016679B2 (en) Multiple frame distributed rendering of interactive content
KR102166861B1 (en) Enabling augmented reality using eye gaze tracking
CN108876934B (en) Key point marking method, device and system and storage medium
US9264479B2 (en) Offloading augmented reality processing
US9240070B2 (en) Methods and systems for viewing dynamic high-resolution 3D imagery over a network
US20180276882A1 (en) Systems and methods for augmented reality art creation
US20140002443A1 (en) Augmented reality interface
EP2972950B1 (en) Segmentation of content delivery
US9269324B2 (en) Orientation aware application demonstration interface
US20170046879A1 (en) Augmented reality without a physical trigger
CA2898668A1 (en) Realization method and device for two-dimensional code augmented reality
US20210166461A1 (en) Avatar animation
US20160086365A1 (en) Systems and methods for the conversion of images into personalized animations
US20170043256A1 (en) An augmented gaming platform
US11451721B2 (en) Interactive augmented reality (AR) based video creation from existing video
CN109461215B (en) Method and device for generating character illustration, computer equipment and storage medium
EP3652704B1 (en) Systems and methods for creating and displaying interactive 3d representations of real objects
US11562538B2 (en) Method and system for providing a user interface for a 3D environment
US11095956B2 (en) Method and system for delivering an interactive video
US20180059880A1 (en) Methods and systems for interactive three-dimensional electronic book
WO2015131950A1 (en) Creating an animation of an image
Rattanarungrot et al. A Mobile Service Oriented Multiple Object Tracking Augmented Reality Architecture for Education and Learning Experiences.
US20240185546A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
US20230334790A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20240119690A1 (en) Stylizing representations in immersive reality applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14710512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14710512

Country of ref document: EP

Kind code of ref document: A1