WO2012079614A1 - Method for creating merged media data with a mobile device - Google Patents

Method for creating merged media data with a mobile device

Info

Publication number
WO2012079614A1
Authority
WO
WIPO (PCT)
Prior art keywords
media data
user
mobile device
frames
frame
Prior art date
Application number
PCT/EP2010/007757
Other languages
French (fr)
Inventor
Fredrik Johansson
Original Assignee
Sony Ericsson Mobile Communications Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Ericsson Mobile Communications Ab filed Critical Sony Ericsson Mobile Communications Ab
Priority to PCT/EP2010/007757 priority Critical patent/WO2012079614A1/en
Publication of WO2012079614A1 publication Critical patent/WO2012079614A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00132Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture in a digital photofinishing system, i.e. a system where digital photographic images undergo typical photofinishing processing, e.g. printing ordering
    • H04N1/00167Processing or editing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00132Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture in a digital photofinishing system, i.e. a system where digital photographic images undergo typical photofinishing processing, e.g. printing ordering
    • H04N1/00185Image output
    • H04N1/00196Creation of a photo-montage, e.g. photoalbum

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method for creating merged media data with a mobile device (10) is provided. The mobile device (10) comprises a media capturing device (11, 12) adapted to capture environmental information (40) as media data. According to the method, a template definition (21-24) is received from a user. The template definition (21-24) comprises a layout of a plurality of frames (1-3) adapted to represent media data. For each of the plurality of frames (1-3) the user is requested to capture media data with the media capturing device (11, 12), and the captured media data is stored. Finally, the merged media data is created by combining the stored media data of the plurality of frames (1-3) based on the template definition (21-24).

Description

METHOD FOR CREATING MERGED MEDIA DATA WITH A MOBILE DEVICE
The present invention relates to a method for creating merged media data with a mobile device and a mobile device adapted to create merged media data.
BACKGROUND OF THE INVENTION
When a user of a mobile device, e.g. a digital camera or a mobile phone comprising a digital camera, has taken several pictures, the user may want to arrange these pictures in a collage which may then be printed, represented on a webpage in the internet, or sent via a multimedia service to a friend. However, arranging and combining the taken pictures to create a collage may be a time-consuming procedure, especially if the taken pictures have to be specifically aligned or cropped to fit the collage.
Therefore, there is a need to provide a method which enables a user to create an arrangement of several pictures in a fast and easy way.
SUMMARY OF THE INVENTION
According to the present invention, this object is achieved by a method for creating merged media data with a mobile device as defined in claim 1 and a mobile device as defined in claim 9. The dependent claims define preferred and advantageous embodiments of the invention.
According to an aspect of the present invention a method for creating merged media data with a mobile device is provided. The mobile device comprises a media capturing device, for example a camera, adapted to capture environmental information as media data. According to the method a template definition is received from a user. The template definition comprises a layout comprising a plurality of frames, and each frame is adapted to represent media data. For each of the plurality of frames the user is requested to capture media data with the media capturing device, and for each of the plurality of frames the captured media data is stored. Based on the template definition the merged media data is created by combining the stored media data of the plurality of frames. Thus, when a user takes, for example, pictures as the media data, the user can adapt a size, a picture section, or an orientation of each picture such that it matches the frame and the layout of the template. Furthermore, after the last picture has been captured for the last frame, a merged picture is automatically created and stored, which may be sent directly via a multimedia service to a friend or a server. Therefore, time-consuming post-processing can be avoided, and transmission costs for sending a plurality of pictures can be avoided when the merged media data is sent as a single merged picture collage.
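For illustration, the claimed flow of template selection, per-frame capture and merging can be sketched as a small data model. The following Kotlin sketch is not part of the disclosure; all names (TemplateDefinition, Frame, createMergedMedia and so on) are illustrative assumptions.

```kotlin
// Illustrative sketch only -- names and types are assumptions, not part of the disclosure.

/** Rectangular region of the layout, in display coordinates. */
data class Frame(
    val index: Int,            // capture order shown to the user (1, 2, 3, ...)
    val left: Int,
    val top: Int,
    val width: Int,
    val height: Int,
    var capturedMedia: ByteArray? = null  // still image, video or audio data captured for this frame
)

/** A layout of several frames, either predefined or drawn by the user. */
data class TemplateDefinition(
    val id: Int,
    val frames: List<Frame>
)

/** High-level flow of the described method: capture media for each frame, then merge. */
fun createMergedMedia(
    template: TemplateDefinition,
    captureForFrame: (Frame) -> ByteArray,    // prompts the user and returns the captured media
    merge: (TemplateDefinition) -> ByteArray  // combines the stored media according to the layout
): ByteArray {
    for (frame in template.frames.sortedBy { it.index }) {
        frame.capturedMedia = captureForFrame(frame)  // request and store media for each frame
    }
    return merge(template)                            // single merged media file, e.g. one JPEG
}
```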
According to an embodiment, the step of receiving the template definition from the user comprises displaying a plurality of predetermined template definitions on a display of the mobile device and receiving an input from the user selecting one of the template definitions from the plurality of predetermined template definitions. The predetermined template definitions may comprise template definitions having different numbers of frames and different arrangements of the frames. For example, several template definitions with three frames each and different arrangements may be provided. When the user wants to take three pictures of a scene, the user may select one of the templates having an appropriate arrangement of the three frames, and after having selected the appropriate template, the user is requested to capture a corresponding picture for each of the three frames. Therefore, the method is also suitable in situations in which the user wants to take the pictures within a short time, for example in a series of snapshots.
According to another embodiment, for receiving the template definition from the user, an empty template is displayed on a display of the mobile device and the user enters at least one frame margin definition for defining the plurality of frames via a graphical user interface of the mobile device. This embodiment allows the user to create a self-defined template definition comprising a plurality of frames whose arrangement is user-defined. For defining the frames the user inputs the frame margin definitions to the mobile device. This may be accomplished by use of a touch-sensitive surface of the mobile device. For example, the user may define the frame margin definitions by drawing lines on the touch-sensitive surface with a finger, and when the user has finished the frame margin definition, a plurality of frames is automatically determined based on the drawn lines and the areas separated by the drawn lines.
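As a rough illustration of how a plurality of frames could be determined from a drawn margin line, the sketch below splits the template area along the dominant direction of the stroke. The patent does not specify this algorithm; the heuristic and all names are assumptions.

```kotlin
// Hypothetical heuristic: split the template into two frames along a drawn stroke.
// The disclosure leaves the exact algorithm open; this is an illustrative assumption.

data class Point(val x: Int, val y: Int)
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

fun splitByStroke(template: Rect, stroke: List<Point>): Pair<Rect, Rect> {
    val minX = stroke.minOf { it.x }
    val maxX = stroke.maxOf { it.x }
    val minY = stroke.minOf { it.y }
    val maxY = stroke.maxOf { it.y }
    return if (maxX - minX >= maxY - minY) {
        // Mostly horizontal stroke: split into an upper and a lower frame.
        val y = (minY + maxY) / 2
        Rect(template.left, template.top, template.right, y) to
            Rect(template.left, y, template.right, template.bottom)
    } else {
        // Mostly vertical stroke: split into a left and a right frame.
        val x = (minX + maxX) / 2
        Rect(template.left, template.top, x, template.bottom) to
            Rect(x, template.top, template.right, template.bottom)
    }
}
```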
According to another embodiment, the media data comprises video data and/or still image data. The video data as well as the still image data may be captured with a camera of the mobile device. Each frame may be assigned a still image or a video sequence of user-defined length. The resulting merged media data comprises the still image data as well as the video data and may be viewed on the mobile device, sent to another mobile device via a multimedia service, or represented on a webpage in the internet.
When requesting the user to capture media data for one of the plurality of frames, the media data which is currently captured by the media capturing device may be continuously displayed in the frame, and an input from the user may be received which triggers the currently captured media data to be stored as the media data of the frame. The trigger may capture a still image or may start recording a video sequence of user-defined length. As the media data captured by the media capturing device is continuously displayed in the frame, the user gets a direct impression of how the media data will appear in the layout of the merged media data.
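The viewfinder behaviour described above can be pictured as drawing each incoming camera preview image into the currently active frame until the user triggers the capture. The sketch below assumes Android-style graphics classes and a hypothetical preview callback; none of these names come from the disclosure.

```kotlin
// Illustrative sketch with assumed names; the disclosure does not prescribe an API.
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect

class FrameViewfinder(private val activeFrame: Rect) {

    @Volatile
    var latestCapture: Bitmap? = null   // set when the user triggers the capture
        private set

    @Volatile
    private var triggered = false

    /** Called for every preview image delivered by the camera. */
    fun onPreviewFrame(preview: Bitmap, layoutCanvas: Canvas) {
        // Continuously display the live camera image inside the active frame only.
        layoutCanvas.drawBitmap(preview, null, activeFrame, null)
        if (triggered && latestCapture == null) {
            // Store the currently displayed image as the media data of this frame.
            latestCapture = preview.copy(Bitmap.Config.ARGB_8888, false)
        }
    }

    /** Called when the user touches the frame or presses a capture button. */
    fun trigger() {
        triggered = true
    }
}
```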
According to another embodiment the media data comprises audio data. The audio data may be recorded via a microphone of the mobile device. In the corresponding frame a symbol representing the audio data may be displayed. Furthermore, the symbol representing the audio data may comprise operating symbols for playing back the audio data. When the merged media data is reproduced, for example on a web page in the internet or on a mobile device, the operating symbols of the frame containing the audio data may be activated by a user to play back the audio data. Furthermore, the audio data may automatically be played back in an endless loop.
According to another embodiment the stored media data of the plurality of frames is automatically deleted after the merged media data has been created. This helps to reduce the amount of memory needed in the mobile device for storing media data.
According to another embodiment the media data comprises still image data and the merged media data comprises a single still image data file. When, for example, the template definition comprises a layout of five frames, the user takes five pictures for the five frames and the resulting layout is automatically stored as a single picture file. Before storing the single file of merged media data, the merged media data may be compressed, for example according to a known picture compression algorithm such as the JPEG standard.
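For illustration, the following sketch composes the stored still images into one bitmap and stores it as a single JPEG-compressed file, assuming an Android-style graphics API; the frame rectangles, quality value and output file are placeholders, not values from the disclosure.

```kotlin
// Minimal sketch, assuming an Android-style platform. Frame positions and the
// output file are illustrative placeholders, not values from the disclosure.
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect
import java.io.File
import java.io.FileOutputStream

fun mergeStillImages(
    canvasWidth: Int,
    canvasHeight: Int,
    frames: List<Pair<Rect, Bitmap>>,   // target rectangle in the layout and the picture taken for it
    outputFile: File
) {
    val merged = Bitmap.createBitmap(canvasWidth, canvasHeight, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(merged)
    for ((target, picture) in frames) {
        // Scale each captured picture into its frame of the selected template.
        canvas.drawBitmap(picture, null, target, null)
    }
    FileOutputStream(outputFile).use { out ->
        // Store the whole collage as one compressed JPEG file.
        merged.compress(Bitmap.CompressFormat.JPEG, 90, out)
    }
}
```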
According to another aspect of the present invention a mobile device is provided. The mobile device comprises a media capturing device adapted to capture environmental information as media data, a display for displaying output information to a user of the mobile device, an input device for receiving input information from the user, and a processing unit. The media capturing device may comprise, for example, a camera adapted to capture still images or video sequences. Furthermore, the media capturing device may comprise a microphone for capturing audio data. The display may comprise a liquid crystal display adapted to display a graphical user interface and image data. The display may be adapted to display colored pictures. Furthermore, the display may comprise a touch-sensitive surface such that information may be input by the user via the touch-sensitive surface, which acts as the input device. Furthermore, the input device may comprise further operating devices, for example a push button to trigger a capturing of still image or video data. The processing unit is adapted to receive a template definition from the user. The template definition comprises a layout of a plurality of frames and each frame is adapted to represent media data. The processing unit is furthermore adapted to request the user to capture media data for each of the plurality of frames with the media capturing device and to store the captured media data. Furthermore, the processing unit is adapted to create merged media data by combining the stored media data of the plurality of frames based on the template definition.
The mobile device may be adapted to perform the above-described method and therefore provides the above-described advantages. The mobile device may comprise a mobile phone, a personal digital assistant, a digital camera, or a navigation system.
Although specific features described in the above summary and the following detailed descriptions are described in connection with specific embodiments, it is to be understood that the features of the embodiments can be combined with each other unless noted otherwise.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in more detail with reference to the accompanying drawings.
Fig. 1 shows a block diagram of a mobile device according to an embodiment of the present invention.
Fig. 2 shows schematically a mobile device displaying a plurality of template definitions according to an embodiment of the present invention.
Fig. 3 shows schematically a mobile device displaying a user defined template definition.
Fig. 4 shows schematically a mobile device capturing environmental image information in a first frame according to an embodiment of the present invention.
Fig. 5 shows the mobile device of Fig. 4 capturing environmental image information in a second frame.
Fig. 6 shows the mobile device of Fig. 4 capturing environmental image information in a third frame.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
In the following, exemplary embodiments of the present invention will be described in more detail. It has to be understood that the following description is given only for the purpose of illustrating the principles of the invention and is not to be taken in a limiting sense. Rather, the scope of the invention is defined only by the appended claims and is not intended to be limited by the exemplary embodiments described hereinafter.
It is to be understood that the features of the various exemplary embodiments described herein may be combined with each other unless specifically noted otherwise. The same reference signs in the various drawings refer to similar or identical components.
Fig. 1 shows a mobile device 10, for example a mobile phone, schematically in more detail. The mobile device 10 comprises a camera 11, a microphone 12, a display 13, a touch-sensitive surface 14, a processing unit 15, a memory 16, a loudspeaker 17, a radio frequency unit 18 and an antenna 19. The camera 11 may be adapted to take pictures or video sequences of an environment around the mobile device 10. The microphone 12 may be adapted to receive audio data from an environment of the mobile device 10. The microphone 12 may furthermore be adapted to receive voice data to be transmitted in a mobile communication via the radio frequency unit 18 and the antenna 19 to another communication device. The loudspeaker 17 may be adapted to play back audio data, for example audio data received with the microphone 12 and stored in memory 16, or audio data received via the antenna 19 and the radio frequency unit 18 from another communication device. The radio frequency unit 18 may furthermore be adapted to receive and transmit data of a multimedia service, a so-called MMS. The touch-sensitive surface 14 may be arranged on an upper side surface of the display 13, and thus the display 13 and the touch-sensitive surface 14 may compose a so-called touch screen. The touch screen may be adapted to operate the mobile device 10 by displaying operating symbols on the display 13 and receiving operating inputs from a user via the touch-sensitive surface 14.
Fig. 2 shows an output on the display 13 of the mobile device 10 when an application for creating a collage according to an embodiment of the present invention has been started. In the context of this description the term "collage" is used for an arrangement of a plurality of media data on a display area. The media data may comprise for example still images which are arranged in a certain layout. The collage may be displayed on the display 13 of the mobile device 10 or may be displayed on any other kind of display, for example on a display of a computer or a display of a TV. Furthermore, the collage may be printed on paper or photographic paper. The layout comprises a plurality of frames which are arranged in a certain manner in the display area. In each frame image data may be represented.
On the display 13 of the mobile device 10 a plurality of template definitions are shown, each comprising a different layout comprising a plurality of frames. In the upper left area of the display 13 a first template definition 21 is shown comprising a layout of three frames 1-3 which are arranged one below the other. The numbering of the frames indicates an order in which the frames are to be filled with media data as will be explained in connection with Fig. 4-6. In the upper right area of the display 13 a second template definition 22 is shown comprising two frames 1, 2, wherein frame 1 is a larger frame and frame 2 is a smaller frame covering a lower area of the larger frame 1. In the lower left area of the display 13 a third template definition 23 is shown comprising two frames 1, 2. Frame 1 is a circular frame covering a center part of the larger underlying frame 2. In the lower right area of the display 13 a fourth template definition 24 is shown which is currently empty and which may be defined by the user.
The user may select one of the template definitions 21-24 by touching the touch-sensitive surface 14 which is arranged above the display 13. Furthermore, the user may select one of the template definitions 21-24 by using buttons 25-27 of the mobile device 10. When the user selects template definition 24, an empty template definition will be displayed on the display 13 and the user is prompted to enter, in a freehand drawing mode, frame margin definitions on the display 13 to define a plurality of frames. The user may, for example, move a finger on the display 13 along a line 31 starting from point 32 and ending at point 33 as shown in Fig. 3. After the user has defined this margin line 31, two frames 1, 2 are automatically defined. The user may then define more frame margin definitions for subdividing the frames 1 or 2. Finally, the user indicates, for example by pressing one of the buttons 25 to 27, that the user-defined template definition is completed.
After the user has selected one of the predefined template definitions 21-23 or has completed the user-defined template definition 24, the selected template definition is displayed on the display 13 of the mobile device 10. Fig. 4 shows this situation in case the user has selected template definition 21. Furthermore, image data of an environment 40 of the mobile device 10 is continuously displayed in the first frame 1 of the template definition 21. Frame 1 now acts as a viewfinder of the camera 11. By moving the mobile device the user can select the image section of the environment 40 which shall be captured. For capturing an image the user may use one of the buttons 25-27 or simply touch frame 1. After the user has triggered the capturing of the image, frame 1 is rendered with this image. Furthermore, the user is now prompted to take a second picture for the second frame 2 as shown in Fig. 5. Now the area of frame 2 acts as a viewfinder for the camera 11. In frame 1 the previously taken picture is displayed. Again, the user may select an appropriate image section of the environment 40 to take the picture for frame 2. After having taken the picture for frame 2, the user is prompted to take a picture for frame 3, while in frames 1 and 2 the previously taken pictures are displayed as shown in Fig. 6. The current image composition can be reviewed in real time on the display 13 as the user takes the images or photographs. This may help the user to select the next image section.
When the last picture for the last frame is taken, the mobile device 10 will combine the taken pictures and create a single image which is stored in memory 16. The images for the single frames 1-3 may be discarded to save memory space in memory 16. The single image file may be transmitted via the radio frequency unit 18 and the antenna 19 via a multimedia service to an online service to be viewed by other people or friends.
A typical use case for making such a collage is, for example, when a user is on vacation and wants to share his experience with a friend by taking pictures and sending them as MMS. Instead of sending multiple image files, with the above-described method several images can be combined into one collage and the user only needs to pay for one transmission. Furthermore, for example if the user wants to sell some furniture at an auction website and the website allows only one image per advertisement to be uploaded, the user can take several pictures of the furniture, combine them into one file and upload the combined file.
While exemplary embodiments have been described above, various modifications may be implemented in other embodiments. For example, when taking the pictures for the plurality of frames 1-3, the user may select to rearrange the order of the frames by dragging one on top of the other to switch their positions. The user may also touch one of the frames to re-capture the image that will be inserted into the selected frame. Furthermore, each of the images in the collage may be treated with different visual effects, for example conversion to black and white, saturation enhancement, blur, or sharpening. This may be user-selectable while taking the images or afterwards. Furthermore, according to further embodiments, not only still images may be captured and rendered into the frames, but also video sequences may be captured and associated with a frame. Furthermore, audio data may also be captured and associated with a frame. Additionally, images which have been taken before and which are stored in memory 16 may be rendered upon user command in any of the frames 1-3.
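One of the mentioned visual effects, conversion to black and white, could for instance be realised with a desaturating color filter applied to the image of a single frame. The sketch below again assumes Android's graphics classes and is purely illustrative.

```kotlin
// Illustrative sketch, assuming Android's graphics classes.
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.ColorMatrix
import android.graphics.ColorMatrixColorFilter
import android.graphics.Paint

/** Returns a black-and-white copy of the picture taken for one frame. */
fun toBlackAndWhite(source: Bitmap): Bitmap {
    val result = Bitmap.createBitmap(source.width, source.height, Bitmap.Config.ARGB_8888)
    val paint = Paint().apply {
        // A saturation of 0 removes all colour information.
        colorFilter = ColorMatrixColorFilter(ColorMatrix().apply { setSaturation(0f) })
    }
    Canvas(result).drawBitmap(source, 0f, 0f, paint)
    return result
}
```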
Finally, it is to be understood that all the embodiments described above are considered to be comprised by the present invention as it is defined by the appended claims.

Claims

1. A method for creating merged media data with a mobile device, the mobile device (10) comprising a media capturing device (11, 12) adapted to capture environmental information (40) as media data, the method comprising the steps of:
receiving a template definition (21-24) from a user, wherein the template definition (21-24) comprises a layout of a plurality of frames (1-3), wherein each frame (1-3) is adapted to represent media data,
for each of the plurality of frames (1-3) :
- requesting the user to capture media data for the frame (1-3) with the media capturing device (11, 12), and
- storing the captured media data,
and
creating the merged media data by combining the stored media data of the plurality of frames (1-3) based on the template definition (21-24).
2. The method according to claim 1, wherein the step of receiving the template definition (21-24) from the user comprises :
displaying a plurality of predefined template definitions (21-24) on a display (13) of the mobile device (10), and receiving an input from the user selecting one template definition (21-24) from the plurality of predefined template definitions (21-24).
3. The method according to claim 1 or 2, wherein the step of receiving the template definition (21-24) from the user comprises :
displaying an empty template (24) on a display (13) of the mobile device (10), and receiving via a graphical user interface of the mobile device from the user at least one frame margin definition (31) for defining the plurality of frames (1, 2).
4. The method according to any one of the preceding claims, wherein the media data comprises at least one of video data and still image data.
5. The method according to claim 4, wherein the step of requesting the user to capture media data for one of the plurality of frames (1-3) comprises:
continuously displaying media data captured currently by the media capturing device (11, 12) in the frame (1-3), and receiving an input from the user triggering the currently captured media data to be stored as the media data of the frame (1-3).
6. The method according to any one of the preceding claims, wherein the media data comprises audio data.
7. The method according to any one of the preceding claims, wherein the stored media data of the plurality of frames (1-3) is automatically deleted after the merged media data has been created.
8. The method according to any one of the preceding claims, wherein the media data comprises still image data, and wherein the merged media data comprises a single still image data file.
9. A mobile device, comprising:
a media capturing device (11, 12) adapted to capture environmental information (40) as media data, a display (13) for displaying output information to a user of the mobile device (10),
an input device (14, 25-27) for receiving input information from the user, and
a processing unit (15) adapted to
receive a template definition (21-24) from the user, wherein the template definition (21-24) comprises a layout of a plurality of frames (1-3), wherein each frame (1-3) is adapted to represent media data,
for each of the plurality of frames (1-3): request the user to capture media data for the frame (1-3) with the media capturing device (11, 12), and store the captured media data, and
create merged media data by combining the stored media data of the plurality of frames (1-3) based on the template definition (21-24).
10. The mobile device according to claim 9, wherein the mobile device (10) is adapted to perform the method according to any one of claims 1-8.
11. The mobile device according to claim 9 or 10, wherein the mobile device (10) comprises at least one of the group comprising a mobile phone, a personal digital assistant, a digital camera, and a navigation system.
PCT/EP2010/007757 2010-12-17 2010-12-17 Method for creating merged media data with a mobile device WO2012079614A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/007757 WO2012079614A1 (en) 2010-12-17 2010-12-17 Method for creating merged media data with a mobile device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/007757 WO2012079614A1 (en) 2010-12-17 2010-12-17 Method for creating merged media data with a mobile device

Publications (1)

Publication Number Publication Date
WO2012079614A1 true WO2012079614A1 (en) 2012-06-21

Family

ID=44625023

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/007757 WO2012079614A1 (en) 2010-12-17 2010-12-17 Method for creating merged media data with a mobile device

Country Status (1)

Country Link
WO (1) WO2012079614A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9662845B2 (en) 2011-11-21 2017-05-30 Olympus Corporation Method for manufacturing optical element and device for manufacturing same

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050122412A1 (en) * 2002-04-17 2005-06-09 Seiko Epson Corporation Digital camera
EP1549051A1 (en) * 2002-09-30 2005-06-29 Matsushita Electric Industrial Co., Ltd. Portable telephone

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Nokia E71 User Guide", 1 January 2008, article "Create Presentations", pages: 56, XP055011394 *
CHRIS WHITE: "App review: Diptic is a delightful camera app for arranging and combining photos", WWW.TUAW.COM, 15 July 2010 (2010-07-15), pages 1 - 2, XP055011026, Retrieved from the Internet <URL:http://www.tuaw.com/2010/07/15/app-review-diptic-is-a-delightful-camera-app-for-arranging-and/> [retrieved on 20111102] *

Similar Documents

Publication Publication Date Title
KR102013331B1 (en) Terminal device and method for synthesizing a dual image in device having a dual camera
CN110100251B (en) Apparatus, method, and computer-readable storage medium for processing document
EP3528140A1 (en) Picture processing method, device, electronic device and graphic user interface
EP3226537A1 (en) Mobile terminal and method for controlling the same
US9781355B2 (en) Mobile terminal and control method thereof for displaying image cluster differently in an image gallery mode
KR20090106755A (en) Method, Terminal for providing memo recording function and computer readable record-medium on which program for executing method thereof
US10048858B2 (en) Method and apparatus for swipe shift photo browsing
CN110572706B (en) Video screenshot method, terminal and computer-readable storage medium
TW201608385A (en) Methods and systems for media collaboration groups
CN112822394B (en) Display control method, display control device, electronic equipment and readable storage medium
CN112187626B (en) File processing method and device and electronic equipment
JP2005033346A (en) Apparatus and method for processing information, and software
TW201608398A (en) Methods and systems for image based searching
JP2015198387A (en) Moving image processing program, moving image processing method, and moving image processing device
US20230133148A1 (en) Movie creation method, non-transitory computer readable medium, and movie creation apparatus
WO2012079614A1 (en) Method for creating merged media data with a mobile device
CN112396675A (en) Image processing method, device and storage medium
US20210377454A1 (en) Capturing method and device
CN109963078B (en) Image processing system, image processing method, image processing apparatus, recording medium, and portable terminal
CN109474782B (en) Interface device for data editing, data editing method, and recording medium
JP2017046162A (en) Synthetic moving image creation system, synthetic moving image creation support system and synthetic moving image creation program
US20140153836A1 (en) Electronic device and image processing method
JP4427784B2 (en) Image processing apparatus and mobile phone equipped with image processing apparatus
JP2016095620A (en) Display device, server device, display system and control method thereof, control program, and storage medium
KR101072045B1 (en) Mbile communication device and image special effect service method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10803569

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10803569

Country of ref document: EP

Kind code of ref document: A1