GB2525035A - Technique for gathering and combining digital images from multiple sources as video - Google Patents

Technique for gathering and combining digital images from multiple sources as video

Info

Publication number
GB2525035A
GB2525035A
Authority
GB
United Kingdom
Prior art keywords
entities
image
video
optionally
arrangement according
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1406540.3A
Other versions
GB201406540D0 (en)
Inventor
Antti Autioniemi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
YOULAPSE Oy
Original Assignee
YOULAPSE Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by YOULAPSE Oy filed Critical YOULAPSE Oy
Priority to GB1406540.3A priority Critical patent/GB2525035A/en
Publication of GB201406540D0 publication Critical patent/GB201406540D0/en
Publication of GB2525035A publication Critical patent/GB2525035A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2353Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234336Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by media transcoding, e.g. video is transformed into a slideshow of still pictures or audio is converted into text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Abstract

A computing entity 102 configured to receive images 110 from a plurality of electronic devices 112 and combine the images into a video representation according to the image metadata. Preferably, metadata representing the date, time, location, owner, device or title of the image is used to sort the images for assimilation into a video. Audio in the form of digital music files or audio samples may also be incorporated into the video. The electronic devices may be mobile phones, smart phones, tablets, phablets, digital cameras, laptops or desktop computers and may pre-process the images. The computing entity is preferably a remote server which may process the received images by applying filters, effects, format conversions, or other operations. Optionally, the computing entity may be one of the electronic devices. The images may be still images, graphics or photographs, or they may be sequences of images in the form of videos. The described embodiment automatically creates video summaries of events by collecting images or videos from a number of different users and devices and collating them according to information such as the time and location at which the media was captured.

Description

TECHNIQUE FOR GATHERING AND COMBINING DIGITAL IMAGES FROM MULTIPLE SOURCES AS VIDEO
FIELD OF THE INVENTION
Generally the present invention concerns gathering digital content from various sources and creating a video of the gathered content. Particularly, however not exclusively, the invention pertains to a method for creating a video representation of images gathered from various users and devices.
BACKGROUND
Recently the development of smartphone cameras and digital cameras has led to an increasing popularity in creating graphical digital content. The ability to carry a digital camera virtually anywhere allows users to more freely express their creativity and take a lot of pictures and videos of anything ranging from vast gatherings such as festivals to ordinary everyday life situations such as seasonal changes in nature.
Nowadays, people also tend to be very collective in sharing and using content, and being part of a jointly created content is often felt as a part of identity and emotional attachment. However, going through a massive selection of unsorted photos, images, videos and even audio from different dates, locations and devices is arduous and inefficient. Hence, in the absence of a better use, a large part of user-created content is often forgotten and left unused and unorganized in storage folders and such, particularly since so much content is produced and managing the content with reverence is needlessly time-consuming for users.
Collecting idle and unused content from a plurality of users is possible with today's systems, but it is often arranged so that the users still have to proactively choose and pick the content they wish to share with a system such as a blogging platform, social media or an image sharing or saving system. Moreover, these systems aren't able to take advantage of, arrange and merge multimedia content in a way other than how the users manually arrange, categorize and wish to present the content. Again evidently, individual users are left with all the managing and sharing of their content, and even then, they aren't able to create and merge content easily with other users who would possess similar content but with whom they aren't in touch. For example, users who have attended and created content of a happening, such as people attending a festival who take photos and video, aren't usually in touch with each other and are so unable to create content together, and for this reason end up just storing content or at best using some content for their own purposes, such as posting a number of photos on a social media system.
Hence, obviously, creating more cohesive and meaningful content from a plurality of user-created multimedia content from various users has been poorly solved, if at all.
SUMMARY OF THE INVENTION
The objective of the embodiment of the present invention is to at least alleviate one or more of the aforesaid drawbacks evident in the prior art arrangements, particularly in the context of utilizing various image sources to create video content. The objective is generally achieved with an arrangement and a method in accordance with the present invention by having an arrangement capable of connecting to a plurality of electronic devices comprising image entities and a method to collect said image entities and combine them into a video representation.
One advantageous feature of the present invention is that it allows for collecting content, such as pictures, photographs and other image files, from a plurality of devices and combining such content into a video representation advantageously, inter alia, according to date, location and/or user or device information. This way users may for example create lots of images and/or videos on their electronic devices and offer them to be used by the arrangement to create a number of coherent video representations comprising content created by the users on different electronic devices in various locations and instances of time. For example, a number of people participating in an event or happening and creating digital content such as digital images and video by e.g. their mobile devices may offer their content to be collected and combined into a video representation of said event or happening, wherein the image and/or video content constituting the video representation is optionally sequentially arranged according to e.g. location or time data information associated with said images and/or videos.
One of the advantageous features of the present invention is that it allows for creating a video representation, particularly a time-lapse representation, automatically by taking into account the amount and/or the nature and/or format of the content and combining the content, such as images, with suitable audio according to the amount and/or nature of the images.
In accordance with one aspect of the present invention an electronic arrangement, optionally a number of servers, comprising: -a computing entity configured to receive image entities from a plurality of electronic devices, optionally mobile terminals, and configured to process said image entities, the computing entity being specifically configured to: -obtain a plurality of image entities from said plurality of electronic devices, and -combine the obtained image entities into a video representation according to the metadata associated with the image entities, optionally date and/or time data, and/or the source of the image entities.
According to an exemplary embodiment of the present invention the electronic arrangement comprises one or more electronic devices, such as terminal devices, optionally mobile terminal devices or smartphones, tablet computers, phablet computers, desktop computers or servers. According to an exemplary embodiment of the present invention the devices may be used by different users, optionally essentially separately from each other.
According to an exemplary embodiment of the present invention the electronic arrangement is configured to receive, process and/or combine image entities into a video representation by using positioning or geolocation information obtained from the electronic devices. Such positioning information may be acquired by the electronic devices by utilizing techniques such as: Global Positioning System (GPS), other satellite navigation systems, Wi-Fi-based positioning system (WPS), hybrid positioning system, and/or other positioning systems.
According to an exemplary embodiment of the present invention the computing entity may be configured to arrange the image entities by the location information such that the image entities are sequentially ordered according to the proximities of their capturing device locations, optionally without using the image entity metadata information. Optionally the location information obtained directly from the electronic devices may be used together with the associated image entity metadata, optionally such that either is preferred over the other. For example, the location data obtained from the electronic device may be used to first arrange the image entities sequentially, and any metadata information type such as time data or location data provided with the image entities may be used to further (re)arrange the ordering of said entities. Optionally the computing entity may be configured to add the location information received from the electronic devices to the image entity metadata.
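Such proximity-based ordering could, for instance, be realized as a greedy nearest-neighbour chain over the capture coordinates. The following Python sketch illustrates one possible approach, assuming each image entity carries a (latitude, longitude) pair; the dict-based entity layout and the function names are illustrative assumptions, not details fixed by this disclosure.

```python
import math

def haversine_km(a, b):
    # Great-circle distance in kilometres between two (lat, lon) pairs.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def order_by_proximity(entities):
    # Greedy nearest-neighbour ordering of image entities by capture location;
    # entities are dicts with a 'location' key holding (lat, lon), and the
    # first entity in the input is used as the starting point of the chain.
    remaining = list(entities)
    ordered = [remaining.pop(0)]
    while remaining:
        here = ordered[-1]['location']
        nearest = min(remaining, key=lambda e: haversine_km(here, e['location']))
        remaining.remove(nearest)
        ordered.append(nearest)
    return ordered
```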
According to an exemplary embodiment of the present invention the positioning information obtained from the electronic devices may be used in the video representation to establish visualization, such as presenting location information in the video representation. The positioning data may be further on used for other purposes i.a. relating to the construction of the video representation.
According to an exemplary embodiment of the present invention the electronic devices may comprise image entities, video entities and/or audio entities. According to an exemplary embodiment of the present invention the devices may be used to create the image entities, such as by taking photographs, recording sound, and/or creating video.
According to an exemplary embodiment of the present invention the image entities of the arrangement may comprise or be at least somehow associated with metadata, which may be embedded in the image entities, such as written to an image entity code, or otherwise added or linked to the image entities, such as an accompanying sidecar file or a tag file.
Metadata preferably comprises at least one information type of the following: creation date and/or time, creation location, ownership, what device created the entity, keywords, classifications, size, title and/or copyrights.
According to an exemplary embodiment of the present invention the video representation comprises or consists of at least two or more image entities.
According to an exemplary embodiment of the present invention the video representation comprises a number of image entities and a number of video files. According to an exemplary embodiment of the present invention the video representation comprises only a number of video files. According to an exemplary embodiment of the present invention the video representation comprises image entities and a number of audio entities. According to another embodiment of the present invention the video representation comprises image entities, video entities and audio entities.
According to an exemplary embodiment of the present invention the video representation is a time-lapse or other digital video file.
According to an exemplary embodiment of the present invention the video representation may comprise a representation of the selected image entities arranged essentially sequentially. The sequence may be achieved by arranging image entities according to metadata information such as for example time or location data, so that image entities may be in a chronological sequence or in a location-according sequence. The sequence may comprise combining a plurality of metadata information types as a basis for achieving a certain preferred sequence, optionally such that the metadata information types have different priorities over each other, enabling the computing entity to arrange the image entities into a video representation according to the priorities of the metadata information types and the availability of metadata information types. For example, in the absence of a metadata information type the next in priority may be used. Additionally the computing entity may combine image entities into a video representation only if they have required metadata information, such as location information, for example for ensuring that the image entities used for the video representation are desired.
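One way to express such priority-driven ordering with fallback is sketched below in Python; the particular priority list, the required-field filter and the dict layout are assumptions made for illustration only, and values of the same metadata field are assumed mutually comparable.

```python
PRIORITY = ('creation_time', 'location', 'title')  # assumed example priorities
REQUIRED = {'location'}                            # assumed required metadata

def sort_key(entity):
    # Use the highest-priority metadata type the entity actually carries;
    # in the absence of one type, fall back to the next type in priority.
    for rank, field in enumerate(PRIORITY):
        if field in entity['metadata']:
            return (rank, entity['metadata'][field])
    return (len(PRIORITY), None)

def select_and_order(entities):
    # Keep only entities carrying all required metadata, then order them.
    usable = [e for e in entities if REQUIRED <= e['metadata'].keys()]
    return sorted(usable, key=sort_key)
```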
According to an exemplary embodiment of the present invention the frame rate, the frame frequency or image entity frequency, i.e., the pace at which the sequential image entities are gone through, may be set automatically, for example optionally substantially from 5 image entities per second to 6, 8, 10, 12, 14 or 16 image entities per second or to another number of image entities per second. According to an exemplary embodiment of the invention the frame rate is set automatically according to the amount of selected image entities used in the video representation, such that for example an increase in the amount of image entities used in the video representation increases the frame rate, or that an increase in the amount of image entities used in the video representation decreases the frame rate. Optionally the frame rate may be set according to a user input.
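As one concrete illustration, the automatic frame-rate choice could be a simple clamped heuristic like the Python sketch below; the direction of the mapping (more images, higher rate) and the constants are assumptions, since the text equally permits the inverse behaviour or a user override.

```python
def auto_frame_rate(n_images, min_fps=5, max_fps=16):
    # Grow the rate by roughly one frame per second for every 100 selected
    # images, clamped to the 5..16 images-per-second range mentioned above.
    return max(min_fps, min(max_fps, min_fps + n_images // 100))
```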
According to an exemplary embodiment of the present invention the image entities preferably comprise digital image files, such as pictures, drawings, photographs, still images, layered images and/or other graphics files.
The digital image files may be vector and/or raster images. According to an exemplary embodiment the image entities used for the video representation consist of essentially a single file format. According to an exemplary embodiment the image entities used for the video representation comprise essentially a plurality of different file formats. According to an exemplary embodiment of the present invention an image entity may comprise a plurality of digital image files, such as pictures, drawings, photographs, still images, layered images and/or other graphics files, optionally arranged in a sequence and/or as a video.
According to an exemplary embodiment of the present invention the image entities may be from and/or created by a number of different devices.
According to an exemplary embodiment of the present invention a number of the image entities may be created by an electronic device itself either automatically or responsive to user input via a camera feature. According to an exemplary embodiment of the present invention a number of the image entities may have been created outside the electronic devices and utilized by the devices or retrieved on the devices. According to an exemplary embodiment of the present invention the image entities may comprise a combination of image entities produced by the electronic devices and image entities acquired externally, optionally stored on a remote device or transferred to the arrangement from an external source.
According to an exemplary embodiment of the present invention the image entities are stored in the electronic devices. According to an exemplary embodiment of the present invention the image entities are stored in a remote cloud computing entity, such as a remote server, wherefrom they may be accessible and displayable via a plurality of different devices, such as mobile and desktop devices and other servers.
According to an exemplary embodiment of the present invention the video representation may comprise a number of audio entities, such as music, optionally in an even time signature such as 4/4 or 2/4. According to an exemplary embodiment of the present invention the audio entities may be chosen by the computing entity according to the image entities, for example according to the amount of selected image entities and/or the intended length of the video representation. According to an exemplary embodiment of the present invention the audio used in the video representation may be chosen or at least suggested by a number of users, optionally by users of the electronic devices. According to an exemplary embodiment of the present invention the audio entities used in the video representation may be added before the video representation is produced and/or after the video representation is produced.
According to an exemplary embodiment of the present invention the audio entities may comprise a number of digital music files or e.g. audio samples constituting an optionally multi-channel audio track.
According to an exemplary embodiment of the present invention the audio entities may be comprised in the electronic devices, in a server or in the arrangement's memory entity. Additionally the audio entities may be created by a number of different electronic devices either automatically or responsive to user input via an audio recording feature or a video camera feature.
According to an exemplary embodiment of the present invention additional video entities may also be optionally used. The video entities may be comprised in the electronic devices, in a server or in the arrangement's memory entity. The video entities may be created by a number of different electronic devices either automatically or responsive to user input via a video camera feature.
According to an exemplary embodiment of the present invention the computing entity is preferably used to combine image entities and optionally other entities such as video and audio entities to produce a video representation. Additionally the computing entity may be able to process image entities, video entities and/or audio entities. The processing techniques comprise inter alia format conversion, enhancement, restoration, compression, editing, addition of effects, addition of text or other graphics, addition of filter(s), scaling, layering, change of resolution, orienting, noise reduction, image slicing, sharpening or softening, size alteration, cropping, fitting, inpainting, perspective control, lens correction, digital compositing, changing color depth, changing contrast, adjusting color, warping, brightening, rendering and/or (re)arranging.
According to an exemplary embodiment of the present invention at least a part of the image entity, video entity and/or audio entity processing may be done in the electronic devices before being collected by the arrangement.
According to an embodiment of the present invention the electronic devices may control what content, such as which image entities, they allow (and vice versa what content they won't allow) to be collected and/or utilized by the arrangement.
According to an exemplary embodiment of the present invention the arrangement comprises allocating the computing entity tasks, such as collecting, processing and/or combining the image entities and other optional entities into a video representation, to a plurality of electronic devices, for example for carrying out the method phases in parallel for different parts of the content.
In accordance with one aspect of the present invention a method for creating a video representation through an electronic arrangement, comprising: -obtaining a plurality of image entities from a plurality of electronic devices, and -combining the obtained image entities into a video representation according to the metadata associated with the image entities, optionally date and/or time data, and/or the source of the image entities.
According to an exemplary embodiment of the present invention the image entities and other optional entities are combined as a video representation sequentially according to their metadata. The metadata may comprise many types of information as also presented hereinbefore, and the various information types may be categorized and/or prioritized. The different sequences of the video representation may optionally be achieved according to said metadata information type priorities.
In accordance with one aspect of the present invention a computer program product embodied in a non-transitory computer readable medium, comprising computer code for causing the computer to execute: -obtaining a plurality of image entities from a plurality of electronic devices, and -combining the obtained image entities into a video representation according to the metadata associated with the image entities, optionally date and/or time data, and/or the source of the image entities.
According to an embodiment of the present invention the computer program product may be offered as software as a service (SaaS).
Different considerations concerning the various embodiments of the electronic arrangement may be flexibly applied to the embodiments of the method mutatis mutandis and vice versa, as will be appreciated by a skilled person.
As briefly reviewed hereinbefore, the utility of the different aspects of the present invention arises from a plurality of issues depending on each particular embodiment.
The expression "a number of' may herein refer to any positive integer starting from one (1). The expression "a plurality of' may refer to any positive integer starting from two (2), respectively.
The term "exemplary" refers herein to an example or an example-like fea-ture, not to the sole or only preferable option.
Different embodiments of the present invention are also disclosed in the attached dependent claims.
BRIEF DESCRIPTION OF THE RELATED DRAWINGS
Next, the embodiments of the present invention are more closely reviewed with reference to the attached drawings, wherein Fig. 1 illustrates an embodiment of the arrangement in accordance with the present invention.
Fig. 2 is a flow diagram of one embodiment of the method for creating a video representation through an electronic arrangement in accordance with the present invention.
Fig. 3 illustrates an embodiment of a video representation of said image entities in accordance with the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
With reference to Figure 1, an embodiment of the electronic arrangement of the present invention is illustrated.
The electronic arrangement 100 essentially comprises a computing entity 102, a transceiver 104, a memory entity 106 and a user interface 108. The electronic arrangement 100 is further on configured to receive and/or collect image entities 110 from electronic devices 112 via communications networks and/or connections 114. Further on, the arrangement 100 may be configured to receive also other content, such as audio and/or video entities, from the electronic devices 112 via the communications networks and/or connections 114.
The electronic arrangement 100 may comprise or constitute a number of terminal devices, optionally mobile terminal devices or smartphones, tablet computers, phablets, desktop computers, and/or server entities such as servers in a cloud or other remote servers. The arrangement 100 may comprise any of the electronic devices 112 comprising and/or creating/capturing image entities 110, or a separate device, optionally an essentially autonomously or automatically functioning device such as a remote server entity.
The computing entity 102 is configured to at least receive image entities 110, process image entities 110, store image entities 110 and combine image entities 110 into a video representation, optionally with other content such as audio entities and/or video entities. The computing entity 102 comprises, e.g., at least one processing/controlling unit such as a microprocessor, a digital signal processor (DSP), a digital signal controller (DSC), a micro-controller or programmable logic chip(s), optionally comprising a plurality of co-operating or parallel (sub-)units.
The computing entity 102 is further on connected or integrated with a memory entity 106, which may be divided between one or more physical memory chips and/or cards. The memory entity 106 is used to store image entities 110 and other content used to create a video representation, as well as optionally the video representation itself. The memory entity 106 may further on comprise necessary code, e.g. in a form of a computer program/application, for enabling the control and operation of the arrangement 100 and the user interface 108 of the arrangement 100, and provision of the related control data. The memory entity 106 may comprise e.g. ROM (read-only memory) or RAM-type (random access memory) implementations as disk storage or flash storage. The memory entity 106 may further comprise an advantageously detachable memory card/stick, a floppy disc, an optical disc, such as a CD-ROM, or a fixed/removable hard drive.
The transceiver 104 is used at least to collect image entities 110 from the electronic devices 112 and other devices. The transceiver 104 preferably comprises a transmitter entity and a receiver entity, either as integrated or as separate essentially interconnected entities. Optionally, the arrangement comprises at least a receiver entity. The transceiver 104 connects the arrangement 100 with the devices 112 with preferably duplex communication connections 114 via a telecommunications network, such as a wide area network (WAN) and/or local area network (LAN).
The user interface 108 is device-dependent and as such may embody a graphical user interface (GUI), such as those of mobile devices or desktop devices, or a command-line interface, e.g. in case of servers. The user interface 108 may be used to give commands and control the software program. The user interface 108 may be configured to visualize, or present textually, different data elements, status information, control features, user instructions, user input indicators, etc. to the user via for example a display screen. Additionally, the user interface 108 may be used to control the arrangement 100, for example enabling user control in initiating functions such as the action to create, collect and/or process image entities 110 and/or to create a video representation of image entities 110. This allows for e.g. user involvement in choosing content, arranging content, determining metadata priorities and/or which metadata is used, editing any content including the video representation, and/or sharing content with other devices.
The image entities 110 preferably comprise digital image files, such as pictures, drawings, photographs, still images, layered images and/or other graphics files. The digital image files may be vector and/or raster images.
An image entity 110 may optionally additionally comprise a plurality of the abovementioned graphics files, optionally arranged as video or otherwise sequentially.
The image entities 110 may be stored in the arrangement's 100 memory entity 106, in the electronic devices 112 or in a number of other devices such as remote servers (not otherwise used to create image entities 110), wherefrom the image entities 110 may be accessible and displayable via the electronic devices 112 and the arrangement 100.
The image entities 110 may be originally from and/or created by a number of different devices, such as from the various different electronic devices 112. An image entity 110 may be created by an electronic device 112 itself either automatically or responsive to user input via a camera, image creating and/or image editing/processing feature. A number of the image entities 110 may have been created outside the electronic devices 112 and utilized by the arrangement 100 or retrieved on the arrangement 100 to be used by the arrangement 100 to create the video representation, for instance. The image entities 110 may also comprise a combination of image entities 110 produced by the electronic devices 112 and image entities 110 acquired externally, optionally stored on a remote device or transferred to the arrangement 100 from an external source.
The image entities 110 may comprise a number of file formats. The computing entity 102 may be configured to convert file formats so that they are suitable to be processed and combined into a video representation.
The image entities 110 also comprise metadata, which metadata is used for creating the video representation. The metadata may be embedded in the image entities 110, such as written to an image entity 110 code, or otherwise added to the image entities 110, such as an accompanying sidecar file or a tag file. Metadata preferably comprises at least one information type of the following: creation date and/or time, creation location, ownership, what device created the entity, keywords, classifications, size, title and/or copyrights.
Additionally metadata may be comprised and/or created according to a standard type such as exchangeable image file format (Exif). Other forms include the Dublin Core Schema, the International Press Telecommunications Council Information Interchange Model (IPTC-IIM), IPTC Core, IPTC Extension, the Extensible Metadata Platform (XMP) and the Picture Licensing Universal System (PLUS).
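For illustration, capture-time metadata of the kind listed above can be read from an Exif-tagged image with the Pillow library; this sketch assumes a reasonably recent Pillow version, prefers DateTimeOriginal over DateTime, and returns None when neither tag is present.

```python
from datetime import datetime
from PIL import Image

def capture_time(path):
    # Return the Exif capture time of an image file, or None when absent.
    with Image.open(path) as img:
        exif = img.getexif()
        # DateTimeOriginal (tag 36867) lives in the Exif sub-IFD (pointer
        # 0x8769); DateTime (tag 306) in the base IFD serves as a fallback.
        raw = exif.get_ifd(0x8769).get(36867) or exif.get(306)
    if not raw:
        return None
    # Exif stores timestamps as "YYYY:MM:DD HH:MM:SS".
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")
```

Timestamps recovered this way can then feed the chronological ordering used for the video representation.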
The arrangement 100 may be configured to receive, in addition to or instead of image entity 110 metadata-based location data, positioning data from the electronic devices 112, which data may be used to arrange the image entities 110 into a video representation. Such positioning data may be acquired by the electronic devices 112 by utilizing techniques such as: GPS, other satellite navigation systems, WPS, hybrid positioning system, and/or other positioning systems.
The arrangement 100 may receive, store and/or utilize other content such as video entities and/or audio entities. Said entities may be acquired from the electronic devices 112. The video and audio entities may also comprise metadata similar to the image entities 110.
The invention may be embodied as a software program product that may incorporate one or more electronic devices 112. The software program product may be offered as SaaS. The software program product may also incorporate allocating processing of image entities 110, video entities and/or audio entities to one or more devices 112, optionally simultaneously. The software program product may also incorporate allocating and dividing computing tasks related to i.a. creating the video representation to one or more devices 112. Optionally the invention may be facilitated via a browser or similar software, wherein the software program product is external to the arrangement 100 but remotely accessible and usable together with a user interface 108. The software program product may include and/or be comprised e.g. in a cloud server or a remote terminal or server.
With reference to Figure 2, a flow diagram of one embodiment of a method for creating a video representation through an electronic arrangement in accordance with the present invention is shown.
At 202, referred to as the start-up phase, the arrangement executing the method is at its initial state. At this initial phase the computing entity is ready to detect and act on user input via the graphical user interface. Optionally the metadata settings, such as which metadata information types are preferred and/or priorities among the different metadata information types, and/or utilization of electronic device positioning data may be determined.
At 204, image entities are obtained from one or more electronic devices.
Additionally content such as video and audio entities may also be obtained from the electronic devices, a database on a remote server and/or from the arrangement's own memory entity.
Additionally, the users of the electronic devices may control what they wish to share, i.e., what content they allow to be collected for the video representation.
Some image entities may be already combined in the devices at this phase, optionally as video. For example, image entities created substantially sequentially in a burst mode, or otherwise so that any of their metadata information types are close to each other, such as locations substantially close to each other, may be combined as video already in the electronic device before being obtained by the arrangement.
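One way a device could detect such burst sequences is to split a chronologically sorted capture list on time gaps, as in the Python sketch below; the gap threshold and the dict layout with a datetime-valued 'time' key are assumptions made for illustration.

```python
def group_bursts(entities, max_gap_s=1.0):
    # Split a chronologically sorted list of image entities into bursts:
    # a new group starts whenever the gap to the previous capture time
    # exceeds `max_gap_s` seconds.
    groups, current = [], []
    for e in entities:
        if current and (e['time'] - current[-1]['time']).total_seconds() > max_gap_s:
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups
```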
Additionally, positioning data from a number of electronic devices may be acquired at this phase, optionally together with the image and/or other entities. Said positioning data may be used to essentially instantaneously combine the image and/or other entities together. Optionally the positioning data may be used to categorize or otherwise associate the image entities, optionally according to the electronic device proximities to each other, for example such that the closer the electronic devices capturing the image and/or other entities are to each other, and/or to the arrangement, the closer said image and/or other entities are associated together e.g. in the video representation sequences. The electronic device locations and/or mutual proximities/distances to each other are preferably measured at the time the content is created, allowing the arrangement or the electronic device capturing the content to associate the positioning information with the image entities, optionally as metadata or as separate data sent from the electronic device to the arrangement.
At 206, the image entities and other optional entities are processed. Such processing may comprise inter alia format conversion, enhancement, restoration, compression, editing, addition of effects, addition of text or other graphics, addition of filter(s), scaling, layering, change of resolution, orienting, noise reduction, image slicing, sharpening or softening, size alteration, cropping, fitting, inpainting, perspective control, lens correction, digital compositing, changing color depth, changing contrast, adjusting color, warping, brightening, rendering and/or (re)arranging.
Optionally additionally the file formats are converted so that they are mutually compatible and/or so that they can be used to produce the video representation, optionally such that the entity formats support and are translatable into the video representation file format.
One aspect of carrying out the processing is also to make the image entity transitions more fluent inside the video representation, optionally by harmonizing the image entities at least in reference to one or more of the preceding and succeeding image entities of any image entity in a sequence.
The device configuration related image parameters such as focal length, exposure, resolution, colors, etc. may lead to very different looking images. To avoid hard-to-follow and out-of-focus video representations, the processing may substantially unify said parameters so that the sequential image entities constitute a more coherent set. Different filters may for example be used to adjust colors and brightness and to sharpen images, etc. Optionally additionally at least part of the image entity, video entity and/or audio entity processing may be done in the electronic devices before being collected by the arrangement.
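A crude Pillow-based sketch of such per-image harmonization follows; the target resolution, the letterboxing and the autocontrast step are assumed stand-ins for the much richer parameter unification the description contemplates.

```python
from PIL import Image, ImageOps

TARGET_SIZE = (1280, 720)  # assumed common output resolution

def harmonize(path):
    # Normalize one image so that sequential frames form a coherent set:
    # honour the camera orientation, force one colour mode, level the
    # contrast crudely, and letterbox to a shared frame size.
    img = Image.open(path)
    img = ImageOps.exif_transpose(img)
    img = img.convert("RGB")
    img = ImageOps.autocontrast(img)
    return ImageOps.pad(img, TARGET_SIZE, color=(0, 0, 0))
```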
At 208, the image and other optional entities are combined into a video representation, optionally sequentially according to their metadata and/or at least partly according to the positioning data. The action to combine image and other optional entities into a video representation may be initiated substantially automatically, optionally directly after the computing entity has obtained a selection of image entities and processed said image entities, and/or according to a user input. The selection of images may be determined by having a preset to collect a number of image entities and/or other optional entities, the preset being optionally predetermined and changeable. The selection may also be dynamic so that it takes into account the essentially available image and/or other optional entities in the electronic devices, such that the selection is created of the image and/or other optional entities that the arrangement is able to collect and use according to metadata parameters. Additionally optionally only the image and/or other optional entities with suitable metadata may be used.
The sequential order may be for example chronological or location-based.
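As a concrete illustration of this combining step, the sketch below writes an ordered list of frames (for example, harmonized images converted to arrays) to a video file with OpenCV; the codec, container and fixed frame size are assumptions, and any comparable encoder would serve equally.

```python
import cv2
import numpy as np

def write_video(frames, out_path="event.mp4", fps=10):
    # Encode an ordered list of equally sized RGB uint8 arrays as a video.
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path,
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in frames:
        # OpenCV expects BGR channel order, hence the conversion.
        writer.write(cv2.cvtColor(np.ascontiguousarray(frame), cv2.COLOR_RGB2BGR))
    writer.release()
```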
Further on, any metadata information may be used either to construct the sequences of the content constituting the video representation or to visualize or otherwise add content to the representation. For example, any data type may be visualized, optionally textually, about the location, user, device, time and/or date of the content on the graphical video representation.
Optionally additionally a user may be asked to confirm that the image and other optional entities are combined into a video representation essentially before the video representation is created. The confirmation may also comprise adding or removing image and other optional entities that are used for the video representation, processing said entities, and/or presenting a user with a preview of the video representation according to the image entity and other optional entity selection. Optionally the user may change the metadata and/or other positioning data preferences constituting the sequence of the video representation, for example (re)arranging the content chronologically or location-wise.
The user may also be asked whether audio entities are added to the video representation and/or what kind of audio entities are used. Optionally a number of audio entities may be added to the video automatically, for example when image entities are received by the arrangement after the video representation has been created.
At 212, referred to as the end phase of the method, the user may be presented with the video representation and/or the video representation may be transferred or saved to a location, optionally according to user input.
The video representation may be further on processed and edited. Optionally the video representation may be sent to the users' electronic devices.
With reference to Figure 3, a video representation 304 comprising a number of image entities 302 and an audio entity 306 is presented.
The video representation 304 preferably comprises at least two or more image entities 302 (only one pointed out as an example of one of the many image entities 302) arranged essentially sequentially according to their metadata, for example chronologically according to time/date information (as illustrated with the time axis 308) comprised in the image entities 302. Optionally the image entities 302 may be arranged essentially sequentially according to any other metadata information type, such as according to location information. The arrangement may utilize the positioning information of the electronic devices essentially at the time the image entities 302 are created, optionally together with the metadata.
The metadata information comprises different types of information, such as creation date and/or time, creation location, ownership, what or what type of device created the entity, keywords, classifications, size, title and/or copyrights of the content, which information types may have different priorities in relation to each other such that for example the image entities 302 are essentially preferably and/or primarily arranged chronologically or according to location data. In the absence of a preferred metadata information type the next metadata information type in priority is used for arranging the content. The metadata information type priorities may have presets and/or they may be set and/or changed according to user preferences, optionally before and/or after the image entities 302 and other optional entities are combined into a video representation 304.
Additionally any metadata information type and/or the electronic device positioning data may be used, in addition to constituting the sequential structure of the video representation 304, to visualize graphically and/or textually information, optionally about the event, happening, location, time and/or date, and/or user, essentially on the video representation 304.
Additionally, the video representation 304 may comprise only image entities 302, a combination of image entities 302 and audio entities 306, a combination of image entities 302, audio entities 306 and video entities, only video entities, and/or video entities and audio entities 306. The video representation 304 may comprise a time-lapse or other digital video.
The optional video entities may comprise a number of digital video files.
The video entities may be created by a number of different electronic devices either automatically or responsive to user input via a video camera feature. Optionally additionally the video entities may be created by the electronic devices by combining a plurality of image entities 302. The video entities may be comprised in the electronic devices, in a server or in the arrangement's memory entity.
The video representation 304 may comprise, in addition to the image entities 302, audio entities 306 and/or video entities obtained from the electronic devices, other image entities 302 such as blank, differently colored and/or predetermined images in between, before and/or after said image entities 302 and/or video entities. Said other image entities 302 may be chosen by a user and/or they may be added to the video representation 304 automatically according to predefined logic.
The frame rate of the video representation 304 may be set optionally automatically, for example optionally substantially to 5 frames per second or to 6, 8, 10, 12 or 14 frames per second, or to more image entities 302 per second or to fewer image entities 302 per second. Optionally, the frame rate may be set automatically according to the number of selected image entities 302 and/or video entities used in the video representation 304, such that for example an increase in the amount of image entities 302 used in the video representation 304 increases the frame rate, or that an increase in the amount of image entities 302 used in the video representation 304 decreases the frame rate. Optionally, the frame rate may be set according to a user input. Optionally additionally the frame rate may be set according to the audio entities 306, for example according to the nature of the audio entities 306, i.e., the type or time signature of the audio content.
The video representation 304 as well as the other optional video entities are preferably in a digital format, the format being optionally chosen by a user.
The audio entities 306 may comprise a number of digital music files or e.g. audio samples constituting an optionally multi-channel audio track. The audio entity 306 is preferably music in an even time signature such as 4/4 or 2/4. Alternatively or additionally, the audio entity 306 may include ambient sounds or noises. The audio entities 306 comprised in the video representation 304 may be chosen by a user, or the audio entity 306 may be optionally chosen by the computing entity, for example according to the amount of selected image entities 302 and/or the length of the video representation 304, and/or according to predetermined choices of audio entities 306, such as from a list of audio files, optionally as a "playlist". The audio entity 306 comprised in the video representation 304 may be added before the video representation 304 is produced and/or after the video representation 304 is produced.
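A toy selection routine along these lines is sketched below; the dict layout of the track list and the duration-matching rule are illustrative assumptions rather than details prescribed by the disclosure.

```python
def pick_audio(tracks, n_images, fps):
    # Prefer tracks in an even time signature, mirroring the text, then
    # pick the one whose duration best matches the intended video length.
    # `tracks` holds dicts with 'name', 'duration' (s) and 'signature' keys.
    video_len = n_images / fps
    even_metre = [t for t in tracks if t.get('signature') in ('4/4', '2/4')]
    candidates = even_metre or tracks
    return min(candidates, key=lambda t: abs(t['duration'] - video_len))
```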
The audio entities 306 may be comprised in the electronic devices, in a server or in the arrangement's memory entity. Additionally the audio entities 306 may be created by a number of different electronic devices either automatically or responsive to user input via an audio recording feature or a video camera feature.
Selecting adequate audio entities 306 for the video representation 304 comprises at least leaving out the most complex and/or rhythmically complex pieces, as they result in a much less cohesive outcome and aren't suitable with a fixed frame rate. Suitable audio entities 306 that lead to a more seamless video representation 304 comprise music in a simple time signature with less harmonic complexity and irregularity in accentuation.
The scope of the invention is determined by the attached claims together with the equivalents thereof. Skilled persons will again appreciate the fact that the disclosed embodiments were constructed for illustrative purposes only, and the innovative fulcrum reviewed herein will cover further embodiments, embodiment combinations, variations and equivalents that better suit each particular use case of the invention.

Claims (20)

  1. An electronic arrangement (100), optionally a number of servers, comprising: -a computing entity (102) configured to receive image entities (110, 302) from a plurality of electronic devices (112), optionally mobile terminals, and configured to process said image entities (110, 302), the computing entity (102) being specifically configured to: -obtain a plurality of image entities (110, 302) from said plurality of electronic devices (112), and -combine the obtained image entities (110, 302) into a video representation according to the metadata associated with the image entities (110, 302), optionally date and/or time data, and/or the source of the image entities (110, 302).
  2. The arrangement according to any preceding claim, wherein a number of audio entities (306) are combined with the image entities (110, 302) to create a video representation (304).
  3. The arrangement according to any preceding claim, wherein the metadata comprises at least one information type of the following: creation date and/or time, creation location, ownership, what or what type of device created the entity, keywords, classifications, size, title and/or copyrights.
  4. The arrangement according to any preceding claim, wherein location data associated with image entities, optionally as metadata, may be used to at least partly establish the video representation, optionally to determine the mutual order of image entities in the video representation.
  5. The arrangement according to any preceding claim, wherein the video representation (304) comprises a video file incorporating said image entities (110, 302) sequentially ordered.
  6. The arrangement according to any preceding claim, wherein the frame rate of the video representation (304) is substantially about 5, 8, 10, 12 or 14 frames per second.
  7. The arrangement according to any preceding claim, wherein the computing entity (102) is a remote server, such as one or more servers in a cloud.
  8. The arrangement according to any preceding claim, wherein the computing entity (102) is one of the electronic devices.
  9. The arrangement according to any preceding claim, wherein the computing entity's (102) processing of image entities (110, 302) comprises at least one from the list of: format conversion, enhancement, restoration, compression, editing, addition of effects, addition of text or other graphics, addition of filter(s), scaling, layering, change of resolution, orienting, noise reduction, image slicing, sharpening or softening, size alteration, cropping, fitting, inpainting, perspective control, lens correction, digital compositing, changing color depth, changing contrast, adjusting color, warping, brightening, rendering and/or (re)arranging.
  10. The arrangement according to any preceding claim, wherein the video representation (304) of said image entities (110, 302) is a digital video file.
  11. The arrangement according to any preceding claim, wherein the video representation (304) of said image entities (110, 302) is a time-lapse.
  12. The arrangement according to any preceding claim, wherein the image entities (110, 302) comprise digital image files, such as vector or raster format pictures, photographs, layered images, still images and/or other graphics files.
  13. The arrangement according to any preceding claim, wherein an image entity (110, 302) comprises a number of digital image files, still images, photographs, and/or other graphics files, optionally as video.
  14. The arrangement according to any preceding claim, wherein the audio entity (306) comprises a number of digital music files or e.g. audio samples constituting an optionally multi-channel audio track.
  15. The arrangement according to any preceding claim, wherein the electronic devices (112) comprise one or more mobile terminals, optionally smartphones.
  16. The arrangement according to any preceding claim, wherein the electronic devices (112) comprise one or more tablets and/or phablets.
  17. The arrangement according to any preceding claim, wherein the electronic devices (112) comprise one or more desktop computers, laptop computers, or digital cameras, optionally add-on, time-lapse, compact, DSLR or high-definition personal cameras.
  18. The arrangement according to any preceding claim, wherein the electronic devices (112) preprocess image entities (110, 302) before the computing entity collects the image entities.
  19. A method for creating a video representation through an electronic arrangement, comprising: -obtaining a plurality of image entities (204) from a plurality of electronic devices, and -combining the obtained image entities into a video representation (208) according to the metadata associated with the image entities, optionally date and/or time data, and/or the source of the image entities.
  20. A computer program product embodied in a non-transitory computer readable medium, comprising computer code for causing the computer to execute: -obtaining a plurality of image entities from a plurality of electronic devices, and -combining the obtained image entities into a video representation according to the metadata associated with the image entities, optionally date and/or time data, and/or the source of the image entities.
GB1406540.3A 2014-04-11 2014-04-11 Technique for gathering and combining digital images from multiple sources as video Withdrawn GB2525035A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1406540.3A GB2525035A (en) 2014-04-11 2014-04-11 Technique for gathering and combining digital images from multiple sources as video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1406540.3A GB2525035A (en) 2014-04-11 2014-04-11 Technique for gathering and combining digital images from multiple sources as video

Publications (2)

Publication Number Publication Date
GB201406540D0 GB201406540D0 (en) 2014-05-28
GB2525035A true GB2525035A (en) 2015-10-14

Family

ID=50844858

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1406540.3A Withdrawn GB2525035A (en) 2014-04-11 2014-04-11 Technique for gathering and combining digital images from multiple sources as video

Country Status (1)

Country Link
GB (1) GB2525035A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1378910A2 (en) * 2002-06-25 2004-01-07 Eastman Kodak Company Software and system for customizing a presentation of digital images
GB2419768A (en) * 2004-11-01 2006-05-03 Atr Advanced Telecomm Res Inst Creating a video sequence of photographic images in accordance with metadata relating to the images
WO2010077772A1 (en) * 2008-12-17 2010-07-08 Skyhawke Technologies, Llc Time stamped imagery assembly for course performance video replay
US20120066598A1 (en) * 2010-09-15 2012-03-15 Samsung Electronics Co., Ltd. Multi-source video clip online assembly
US20120106917A1 (en) * 2010-10-29 2012-05-03 Kohei Momosaki Electronic Apparatus and Image Processing Method


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2553659A (en) * 2017-07-21 2018-03-14 Weheartdigital Ltd A System for creating an audio-visual recording of an event
GB2553659B (en) * 2017-07-21 2018-08-29 Weheartdigital Ltd A System for creating an audio-visual recording of an event
US11301508B2 (en) 2017-07-21 2022-04-12 Filmily Limited System for creating an audio-visual recording of an event

Also Published As

Publication number Publication date
GB201406540D0 (en) 2014-05-28

Similar Documents

Publication Publication Date Title
US20150294686A1 (en) Technique for gathering and combining digital images from multiple sources as video
US10602058B2 (en) Camera application
US8711228B2 (en) Collaborative image capture
CN102737089B (en) Information processing apparatus and information processing method
US20120189284A1 (en) Automatic highlight reel producer
US20120226663A1 (en) Preconfigured media file uploading and sharing
US11068133B2 (en) Electronic album apparatus and method of controlling operation of same
GB2507036A (en) Content prioritization
EP3046107B1 (en) Generating and display of highlight video associated with source contents
US8943020B2 (en) Techniques for intelligent media show across multiple devices
JP4196303B2 (en) Display control apparatus and method, and program
US20120106917A1 (en) Electronic Apparatus and Image Processing Method
US20150242405A1 (en) Methods, devices and systems for context-sensitive organization of media files
WO2020066291A1 (en) Image processing device, image processing method, and image processing program
CN105956080A (en) Photo naming processing method and apparatus
JP2015198300A (en) Information processor, imaging apparatus, and image management system
CN111480168B (en) Context-based image selection
US11089071B2 (en) Symmetric and continuous media stream from multiple sources
JP5884873B1 (en) Image extraction apparatus, image extraction method, and program
GB2525035A (en) Technique for gathering and combining digital images from multiple sources as video
JP2018055534A (en) Image extraction system, image extraction method and program therefor
US10552888B1 (en) System for determining resources from image data
TWI621954B (en) Method and system of classifying image files
Fisher The Mobile Photographer: An Unofficial Guide to Using Android Phones, Tablets, and Apps in a Photography Workflow
US20140153836A1 (en) Electronic device and image processing method

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)