WO2014048576A2 - System for video clips - Google Patents

System for video clips

Info

Publication number
WO2014048576A2
Authority
WO
WIPO (PCT)
Prior art keywords
video
video clips
wireless communication
computing hardware
software applications
Prior art date
Application number
PCT/EP2013/002917
Other languages
French (fr)
Other versions
WO2014048576A3 (en)
Inventor
Aaron DEY
Steven Allen
Original Assignee
Frameblast Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB1217339.9A external-priority patent/GB2506398B/en
Priority claimed from GB1217355.5A external-priority patent/GB2506399A/en
Priority claimed from GB201217407A external-priority patent/GB2506416A/en
Application filed by Frameblast Ltd filed Critical Frameblast Ltd
Publication of WO2014048576A2 publication Critical patent/WO2014048576A2/en
Publication of WO2014048576A3 publication Critical patent/WO2014048576A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements

Definitions

  • The present disclosure relates to systems for video clips, for example to systems for capturing, editing and distributing video clips; the present disclosure also concerns methods of operating aforesaid systems.
  • The present invention also relates to media distribution systems, for example to media distribution systems which are operable to distribute short video clips; the present invention also concerns methods of distributing media, for example to methods of distributing media including short video clips.
  • The present invention relates to camera apparatus implemented using mobile wireless devices, for example using smart wireless telephones, equipped with at least one optical imaging sensor; the invention is also concerned with methods of implementing a camera apparatus using a mobile wireless device, for example a smart wireless telephone, equipped with at least one optical imaging sensor.
  • The present invention relates to software products recorded on non-transient machine-readable data storage media, wherein the software products are executable upon computing hardware for implementing aforesaid methods, for example at least in part via computing hardware on a mobile wireless device.
  • Software products for editing video clips and still pictures to generate video creations are well known and are executable, as illustrated in FIG. 1 , upon a lap-top computer and/or a desk-top computer 10, namely a personal computer (PC), with a graphical display 12 of considerable screen area, for example of 19 inch (circa 50 cm) diagonal screen size, and appreciable data memory capacity, for example 4 Gbytes of data memory capacity, for storing video clips and still pictures.
  • The computer 10 includes a high-precision pointing device 14, for example a mouse-type pointing device or a tracker ball-type pointing device.
  • A given user is able to manipulate icons 16 corresponding to video clips and/or still pictures presented to the given user along a horizontal timeline 18, to control a sequence in which the video clips and/or still pictures are presented when replayed as part of a composite video creation.
  • The given user is also provided with various options presented on the graphical display 12 for adding visual effects, as well as overlaying sound tracks, for example proprietary commercial sound tracks and/or user sound tracks which the given user has stored in the data memory of the computer 10.
  • The high-precision pointing device 14 and the graphical display 12 of considerable screen area provide a convenient environment in which the given user is capable of making fine adjustments when editing the composite video creation to a completed state for release, for example, via aforementioned popular media sites.
  • Mobile wireless communication devices, for example mobile telephones, referred to as "cell phones" in the USA, first came into widespread use during the 1980s. These earlier wireless communication devices provided relatively simple user interfaces including a keyboard for dialling, and a simple display to provide visual confirmation of dialled numbers as well as simple messages, for example short messaging system (SMS) information. Since the 1980s, mobile wireless communication devices have evolved to become more physically compact, and to be equipped with more processing power and larger data memory. Contemporary mobile communication devices are distinguished from personal computers (PCs) by being of a relatively smaller physical size which will fit conveniently into a jacket pocket or small handbag, for example in an order of 10 cm long, 4 cm broad and 0.5 cm to 1 cm thick.
  • PC: personal computer
  • GUI: graphical user interface
  • Mobile telephones have evolved to be equipped with cameras, together with more computing power and data memory, such that these mobile telephones can be used by a given user to share still pictures and video content with other mobile telephone users.
  • Many contemporary mobile telephones are Internet- enabled, such that they can be employed to surf the Internet.
  • Contemporary mobile telephones have also tended to include various inbuilt sensors, for example at least one miniature camera, an accelerometer, a GPS receiver, a temperature sensor, a touch screen, in addition to a microphone and a loudspeaker required for oral telephonic activities.
  • Example implementations of contemporary smart phones are described in published patent applications as provided in Table 1.
  • The mobile phone includes:
  • a directional detection module for determining whether or not a shooting direction of the mobile phone is vertical
  • the image processing module rotates an image acquired by a camera of the mobile telephone when the shooting direction is vertical.
  • The image is rotated directly inside the mobile phone, thereby avoiding a need for the user to upload the image into a computer and then to rotate the image by 90° manually.
  • The present invention seeks to provide a video clip editing system which is more convenient for users to employ, wherein the system is based upon users employing their wireless communication devices, for example their smart telephones including touch-screen graphical user interfaces, for controlling editing of video clips and/or still pictures to generate corresponding composite creations, namely composite video compositions.
  • The present invention seeks to provide more convenient methods of operating a video clip editing system, wherein the methods are based upon users employing their wireless communication devices, for example their smart telephones including touch-screen graphical user interfaces, for controlling editing of video clips and/or still pictures to generate corresponding composite video creations, namely composite video compositions.
  • The present invention seeks to provide a software application which is executable upon computing hardware of a contemporary smart mobile telephone for adapting the smart mobile telephone technically to function in a manner which is more convenient when editing video content to generate corresponding composite video creations.
  • The present invention also seeks to provide a media distribution system, also known as a media distribution platform, which is operable to reward more effectively given users who generate video content.
  • The present invention seeks to provide a method of distributing media in a form of video content, which rewards more effectively given users who generate video content.
  • The present invention also seeks to provide a camera apparatus, for example implemented using a contemporary mobile telephone, which provides for more convenient capture of video content in a form which is readily susceptible to being communicated by wireless.
  • The present invention seeks to provide a method of capturing video content which is more convenient for users, for example when using a contemporary mobile telephone. Furthermore, the present invention seeks to provide a software application which is executable upon computing hardware of a contemporary mobile telephone for adapting the mobile telephone technically to function in a manner which is more convenient for capturing video content.
  • A video clip editing system as defined in appended claim 1: there is provided a video clip editing system employing a mobile telephone including computing hardware coupled to data memory, to a touch-screen graphical user interface, and to a wireless communication interface, wherein the computing hardware is operable to execute one or more software applications stored in the data memory, characterized in that the one or more software applications are operable when executed on the computing hardware to provide an editing environment on the touch-screen graphical user interface for editing video clips by user swiping-type instructions entered at the touch-screen graphical user interface to generate a composite video creation, wherein a timeline for icons representative of video clips is presented as a scrollable line feature on the touch-screen graphical user interface, and icons of one or more video clips for inclusion into the timeline are presented adjacent to the timeline on the touch-screen graphical user interface, such that video clips corresponding to the icons are incorporated onto the timeline by the user employing swiping-type instructions entered at the touch-screen graphical user interface for
  • The invention is of advantage in that executing one or more software applications on computing hardware creates an environment enabling swiping-motion inclusion of one or more video clips onto a timeline for generating a composite video creation.
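As an illustration only, the swipe-driven timeline described above can be modelled as an ordered sequence of clip references, where a swipe from a candidate icon that ends over a timeline position inserts the corresponding clip at that position. All names and the data structure below are invented for this sketch; the patent does not prescribe an implementation.

```python
# Minimal sketch of a swipe-editable timeline; names are illustrative,
# not taken from the patent.

class Timeline:
    """Ordered sequence of clip identifiers forming a composite creation."""

    def __init__(self):
        self.clips = []

    def insert_by_swipe(self, clip_id, drop_index):
        """A swipe ending over position `drop_index` inserts the clip there."""
        drop_index = max(0, min(drop_index, len(self.clips)))
        self.clips.insert(drop_index, clip_id)

    def remove_by_swipe(self, clip_id):
        """A swipe off the timeline removes the clip from the sequence."""
        self.clips.remove(clip_id)


timeline = Timeline()
timeline.insert_by_swipe("clip_a", 0)
timeline.insert_by_swipe("clip_b", 1)
timeline.insert_by_swipe("clip_c", 1)   # swipe drops clip_c between a and b
print(timeline.clips)                    # ['clip_a', 'clip_c', 'clip_b']
```

The same two gestures, insertion and removal, are the only operations the touch-screen environment needs to expose for sequencing.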
  • The mobile telephone is operable to be coupled in communication with one or more external databases via the wireless communication interface, and manipulation of video clips represented by the icons is executed, at least in part, by proxy control directed by the user from the touch-screen graphical user interface.
  • The one or more software applications when executed upon the computing hardware enable one or more sound tracks to be added to one or more video clips, wherein a duration adjustment of the one or more sound tracks and/or the one or more video clips is executed automatically by the one or more software applications.
  • The one or more sound tracks are adjusted in duration without causing a corresponding shift of pitch of tones present in the sound tracks.
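Duration adjustment without a pitch shift is conventionally done with time-domain overlap-add (OLA) style time stretching; the patent does not specify an algorithm, so the following is a generic sketch with invented parameter names. Because each frame is replayed at its original sample rate, pitch is preserved; basic OLA does introduce phase artifacts, which production systems avoid with WSOLA or a phase vocoder.

```python
import numpy as np

def time_stretch(signal, factor, frame=1024):
    """Overlap-add time stretch: scales duration by `factor` while keeping
    pitch, because frames are replayed at the original sample rate."""
    hop_out = frame // 2                      # 50% overlap: Hann windows sum
    hop_in = max(1, int(round(hop_out / factor)))  # to a near-constant level
    window = np.hanning(frame)
    n_frames = (len(signal) - frame) // hop_in + 1
    out = np.zeros((n_frames - 1) * hop_out + frame)
    for k in range(n_frames):
        chunk = signal[k * hop_in : k * hop_in + frame] * window
        out[k * hop_out : k * hop_out + frame] += chunk
    return out

tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100.0)  # 1 s at 440 Hz
longer = time_stretch(tone, 2.0)
print(len(longer) / len(tone))   # roughly 2.0
```

A factor above 1 lengthens a sound track to fill a longer video timeline; a factor below 1 shortens it, in both cases without transposing the tones.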
  • The one or more software applications executing upon the computing hardware are operable to cause the one or more video clips to be adjusted in duration by adding and/or subtracting one or more image frames from the one or more video clips.
  • The one or more software applications executing upon the computing hardware synthesize a new header or start frame of a video clip when a beginning part of the video clip is subtracted during editing.
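The need to synthesize a new start frame follows from inter-frame coding: frames after a keyframe store only differences, so the first frame retained after a cut must be rebuilt as a self-contained frame. The toy model below (not the patent's implementation; frame payloads are just numbers standing in for pictures) illustrates the idea.

```python
# Illustrative model of trimming a clip whose frames are delta-coded.

def decode(frames):
    """Reconstruct full frame values from a keyframe plus delta frames."""
    value = None
    out = []
    for kind, payload in frames:
        value = payload if kind == "key" else value + payload
        out.append(value)
    return out

def trim_start(frames, n):
    """Drop the first n frames, synthesizing a new keyframe at the cut."""
    full = decode(frames)
    if n == 0:
        return list(frames)
    # The new header/start frame carries the fully decoded picture at the cut.
    return [("key", full[n])] + frames[n + 1:]

clip = [("key", 10), ("delta", 1), ("delta", 2), ("delta", -1)]
print(decode(clip))                 # [10, 11, 13, 12]
print(decode(trim_start(clip, 2)))  # [13, 12]
```

The trimmed clip decodes to the same pictures as the tail of the original, which is exactly what a real editor achieves by re-encoding the first retained frame as a keyframe.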
  • The one or more software applications executing upon the computing hardware are operable to provide a selection of one or more video clips for inclusion into the timeline presented adjacent to the timeline on the touch-screen graphical user interface, wherein the selection is based upon at least one of:
  • A mobile telephone including computing hardware coupled to data memory, to a touch-screen graphical user interface, and to a wireless communication interface, wherein the computing hardware is operable to execute one or more software applications stored in the data memory, characterized in that the method includes:
  • The method further includes operating the mobile telephone to be coupled in communication with one or more external databases via the wireless communication interface, and manipulating video clips represented by the icons, at least in part, by proxy control directed by the user from the touch-screen graphical user interface.
  • The method includes enabling, by way of the one or more software applications executing upon the computing hardware, one or more sound tracks to be added to one or more video clips, wherein a duration adjustment of the one or more sound tracks and/or the one or more video clips is executed automatically by the one or more software applications. More optionally, the method includes adjusting a duration of the one or more sound tracks without causing a corresponding shift of pitch of tones present in the sound tracks. More optionally, the method includes executing the one or more software applications upon the computing hardware to cause the one or more video clips to be adjusted in duration by adding and/or subtracting one or more image frames from the one or more video clips. More optionally, the method includes executing the one or more software applications upon the computing hardware to synthesize a new header or start frame of a video clip when a beginning part of the video clip is subtracted during editing.
  • The method includes executing the one or more software applications upon the computing hardware to provide a selection of one or more video clips for inclusion into the timeline presented adjacent to the timeline on the touch-screen graphical user interface, wherein the selection is based upon at least one of: (a) temporally mutually substantially similar temporal capture time of the video clips;
  • A software application stored in machine-readable data storage media, characterized in that the software application is executable upon computing hardware for implementing a method pursuant to the second aspect of the invention.
  • The software application is downloadable from an external database to a mobile telephone for implementing the method.
  • A media distribution system including one or more databases coupled via a communication network to users, wherein the system provides for a subset of the users to upload video content for distribution via the system to other users, characterized in that:
  • the system includes a reviewing arrangement for receiving the uploaded video content provided to the system and for generating corresponding recommendations which determine an extent to which the video content is disseminated through the system to other users;
  • the system includes a reward arrangement for rewarding users who have uploaded the video content to the system as a function of the recommendations and an extent of distribution of the video content.
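The patent does not specify a reward formula, only that rewards depend on the reviewing arrangement's recommendations and the extent of distribution. One hypothetical rule, with invented names and a made-up per-view rate, is to scale a per-view payment by a normalized recommendation score:

```python
# Hypothetical reward rule: pay a contributor in proportion to reviewer
# recommendations and to how widely the clip was actually distributed.
# The score range and per-view rate are invented for illustration.

def reward(recommendation_score, views, rate_per_view=0.001):
    """Scale a per-view payment by the reviewing arrangement's score (0..1)."""
    if not 0.0 <= recommendation_score <= 1.0:
        raise ValueError("recommendation_score must be in [0, 1]")
    return round(recommendation_score * views * rate_per_view, 2)

print(reward(0.8, 50_000))   # 40.0
print(reward(0.2, 50_000))   # 10.0
```

Under such a rule, a poorly reviewed clip that nevertheless spreads widely still earns less than a well-reviewed one with the same reach, coupling both factors as the claim requires.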
  • The invention is of advantage in that the system is more rewarding for given users of the system to employ when distributing their video content.
  • The media distribution system is implemented such that the reviewing arrangement includes users who belong to at least one special interest group accommodated by the system, and the system includes an arrangement for directing the uploaded video content to the at least one special interest group based on subject matter included in the uploaded video content.
  • The media distribution system is implemented such that the system is operable to generate advertisement content for presenting to a given user, wherein the advertisement content comprises an advertiser's video content combined with video content provided by the given user, or by at least one special interest group to which the given user belongs, wherein the advertiser's video content includes at least one video template into which the video content provided by the given user, or by at least one special interest group to which the given user belongs, is inserted, thereby personalizing the advertisement content to the given user.
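A template with insertion slots can be modelled as an ordered list of segments, some fixed advertiser footage and some placeholders to be filled with the viewer's own (or their group's) clips. The segment tags, slot names, and file names below are invented for this sketch.

```python
# Sketch of template-based advertisement personalization.
# A template is a list of ("fixed", clip) and ("slot", name) segments.

def personalize(template, user_clips):
    """Replace each ("slot", name) entry with the user's clip for that name."""
    out = []
    for kind, value in template:
        if kind == "slot":
            out.append(user_clips[value])    # insert the viewer's own clip
        else:
            out.append(value)                # advertiser footage kept as-is
    return out

template = [("fixed", "brand_intro.mp4"),
            ("slot", "user_moment"),
            ("fixed", "brand_outro.mp4")]
print(personalize(template, {"user_moment": "my_goal_clip.mp4"}))
# ['brand_intro.mp4', 'my_goal_clip.mp4', 'brand_outro.mp4']
```

The advertiser's framing is preserved while the middle of the advertisement differs per viewer, which is the personalization effect the claim describes.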
  • The media distribution system is implemented such that the system includes an arrangement for monitoring a dissemination of the video content to users and aggregating distribution results for generating distribution analyses. More optionally, the media distribution system is implemented such that the arrangement for monitoring the dissemination is operable to monitor dissemination of composite music clips and video clips and to provide analysis data indicative of association of the music clips with video clips elected by users of the system.
  • A method of operating a media distribution system including one or more databases coupled via a communication network to users, wherein the system provides for a subset of the users to upload video content for distribution via the system to other users, characterized in that the method includes:
  • The method includes arranging for the reviewing arrangement to include users who belong to at least one special interest group accommodated by the system, and arranging for the system to include an arrangement for directing the uploaded video content to the at least one special interest group based on subject matter included in the uploaded video content.
  • The method includes arranging for the system to generate advertisement content for presenting to a given user, wherein the advertisement content comprises an advertiser's video content combined with video content provided by the given user, or by at least one special interest group to which the given user belongs, wherein the advertiser's video content includes at least one video template into which the video content provided by the given user, or by at least one special interest group to which the given user belongs, is inserted, thereby personalizing the advertisement content to the given user.
  • The method includes arranging for the system to include an arrangement for monitoring a dissemination of the video content to users and aggregating distribution results for generating distribution analyses. More optionally, the arrangement for monitoring the dissemination is operable to monitor dissemination of composite music clips and video clips and to provide analysis data indicative of association of the music clips with video clips elected by users of the system.
  • A software product recorded on machine-readable data storage media, characterized in that the software product is executable upon computing hardware for executing a method pursuant to the fifth aspect of the invention.
  • A camera apparatus including a wireless communication device incorporating computing hardware coupled to a data memory, to a wireless communication interface for communicating data from and to the wireless communication device, to a graphical user interface for receiving user input, and to an optical imaging sensor for receiving captured image data therefrom, wherein the computing hardware is operable to execute one or more software applications for enabling the optical imaging sensor to capture one or more images, and for storing corresponding image data in the data memory and/or for communicating the corresponding image data from the wireless communication device via its wireless communication interface, wherein the wireless communication device has an elongate external enclosure having a longest dimension (L) defining a direction of a corresponding elongate axis for the wireless communication device, characterized in that
  • the one or more software applications are operable to enable the wireless communication device to capture images when the wireless communication device is operated by its user such that the elongate axis is orientated in substantially an upward direction, wherein the one or more software applications are operable to cause the computing hardware to select sub-portions of captured images provided from the optical imaging sensor and to generate corresponding rotated versions of the selected sub-portions to generate image data for storing in the data memory and/or for communicating via the wireless communication interface;
  • the one or more software applications are operable to enable the wireless communication device to capture the one or more images as one or more video clips in response to the user providing tactile input at an active region of the graphical user interface, wherein each video clip is of short duration (D) and is a self-contained temporal sequence of images.
  • The invention is of advantage in that the camera apparatus is more convenient to employ on account of its substantially vertical operating orientation and its manner of operation to generate self-contained video clips of convenient duration (D) for subsequent processing.
  • By "substantially vertical" it is meant that the elongate axis is within 45° of vertical direction, more preferably is within 30° of vertical direction, and most preferably is within 20° of vertical direction.
  • The short duration (D) is in a range of 1 second to 20 seconds, more preferably in a range of 1 second to 10 seconds, and most preferably substantially 3 seconds.
  • The wireless communication device includes a sensor arrangement for sensing an angular orientation of the elongate axis of the wireless communication device and generating a corresponding angle-indicative signal.
  • The one or more software applications are operable to cause the computing hardware to receive the angle-indicative signal and to rotate the sub-portions of the captured images so that they appear when viewed to be upright and stable images.
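The crop-then-rotate step can be sketched as an array operation: a centred sub-portion of the tall sensor frame is selected and rotated as directed by the orientation sensor. The sensor dimensions and function names below are invented; this sketch handles quarter-turn orientation only, whereas a full implementation would resample for arbitrary tilt angles to achieve the stabilization described above.

```python
import numpy as np

# Illustrative sketch: when the phone is held with its long axis upright,
# a centred sub-portion of the portrait sensor frame is selected and
# rotated so the stored image is a conventional upright frame.

def capture_subportion(sensor_frame, out_h, out_w, quarter_turns=0):
    """Crop a centred out_h x out_w window, then rotate by 90° multiples
    as directed by the orientation sensor's angle-indicative signal."""
    h, w = sensor_frame.shape[:2]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    window = sensor_frame[top:top + out_h, left:left + out_w]
    return np.rot90(window, quarter_turns)

portrait = np.arange(1920 * 1080).reshape(1920, 1080)   # tall sensor frame
stored = capture_subportion(portrait, 1080, 608, quarter_turns=1)
print(stored.shape)   # (608, 1080): landscape output from a vertical hold
```

Selecting a sub-portion, rather than rotating the whole frame, is what lets a vertically held sensor yield a conventional landscape image without black borders.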
  • The one or more software applications are operable when executed upon the computing hardware to present one or more icons representative of video clips upon the graphical user interface, and one or more icons representative of sorting bins into which the one or more icons representative of video clips are susceptible to being sorted, wherein sorting of the one or more icons representative of video clips into the one or more icons representative of sorting bins is invoked by a user swiping motion executed by a thumb or finger of the user on the graphical user interface, wherein a given icon representative of a corresponding video clip is defined at a beginning of the swiping motion and a destination sorting bin for the selected icon representative of a corresponding video clip is defined at an end of the swiping motion.
  • The one or more software applications executing upon the computing hardware are operable to cause the one or more icons representative of video clips upon the graphical user interface to be sorted to be presented in a scrollable array along a longest length dimension of the graphical user interface.
  • The one or more software applications executing upon the computing hardware are operable to cause the one or more icons representative of video clips upon the graphical user interface to be sorted to be presented in a spatial arrangement indicative of a time at which the video clips were captured by the optical imaging sensor.
  • At least one of the one or more icons representative of sorting bins, into which the one or more icons representative of video clips are susceptible to being sorted, is a trash bin, wherein the computing hardware is operable to present the user with a graphical representation option for emptying the trash bin to cause data stored in the data memory corresponding to contents of the trash bin to be deleted for freeing data memory capacity of the data memory.
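The gesture logic above reduces to two hit-tests: the icon under the start of the swipe selects a clip, and the bin under the end of the swipe receives it; emptying the trash bin then frees the corresponding clip data. Region names, coordinates, and data structures below are all invented for this sketch.

```python
# Hypothetical hit-testing for the swipe-to-bin sorting gesture.

def hit(regions, point):
    """Return the name of the rectangular region containing `point`."""
    x, y = point
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def apply_swipe(icon_regions, bin_regions, contents, start, end):
    """Move the clip whose icon is under `start` into the bin under `end`."""
    clip = hit(icon_regions, start)
    bin_name = hit(bin_regions, end)
    if clip is not None and bin_name is not None:
        contents.setdefault(bin_name, []).append(clip)
    return contents

def empty_trash(contents, storage):
    """Delete stored data for trashed clips, freeing data memory."""
    for clip in contents.pop("trash", []):
        storage.pop(clip, None)
    return storage

icon_regions = {"clip_a": (0, 0, 100, 100), "clip_b": (100, 0, 200, 100)}
bin_regions = {"keep": (0, 400, 100, 500), "trash": (100, 400, 200, 500)}
contents = {}
storage = {"clip_a": b"...", "clip_b": b"..."}

apply_swipe(icon_regions, bin_regions, contents, (150, 50), (150, 450))
print(contents)            # {'trash': ['clip_b']}
empty_trash(contents, storage)
print(sorted(storage))     # ['clip_a']
```

A swipe that starts or ends outside any region is simply ignored, which keeps the gesture forgiving on a small touch screen.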
  • The one or more software applications are operable when executed upon the computing hardware to enable the wireless communication device to upload one or more video clips from the data memory to one or more remote proxy servers and to manipulate the one or more video clips uploaded to the one or more proxy servers via user instructions entered via the graphical user interface.
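The point of proxy manipulation is bandwidth economy: the handset uploads each clip once, then sends lightweight edit instructions that the server applies to the full-resolution copies. The session class, command names, and wire format below are invented for illustration; the patent does not define a protocol.

```python
import json

# Sketch of proxy-controlled editing: accumulate edit instructions to send
# over the wireless interface instead of re-uploading video data.

class ProxySession:
    """Tracks uploaded clips and pending edit commands for a proxy server."""

    def __init__(self):
        self.uploaded = set()
        self.commands = []

    def upload(self, clip_id):
        self.uploaded.add(clip_id)

    def trim(self, clip_id, start_s, end_s):
        if clip_id not in self.uploaded:
            raise KeyError(f"{clip_id} not on the proxy yet")
        self.commands.append({"op": "trim", "clip": clip_id,
                              "start": start_s, "end": end_s})

    def to_wire(self):
        """Serialize the pending edits for the wireless interface."""
        return json.dumps(self.commands)

session = ProxySession()
session.upload("clip_a")
session.trim("clip_a", 1.0, 4.0)
print(session.to_wire())
# [{"op": "trim", "clip": "clip_a", "start": 1.0, "end": 4.0}]
```

A few hundred bytes of instructions stand in for megabytes of re-encoded video, which is what makes touch-screen editing over a mobile link practical.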
  • A method of implementing a camera apparatus using a wireless communication device incorporating computing hardware coupled to a data memory, to a wireless communication interface for communicating data from and to the wireless communication device, to a graphical user interface for receiving user input, and to an optical imaging sensor for receiving captured image data therefrom, wherein the computing hardware is operable to execute one or more software applications for enabling the optical imaging sensor to capture one or more images, and for storing corresponding image data in the data memory and/or for communicating the corresponding image data from the wireless communication device via its wireless communication interface, wherein the wireless communication device has an elongate external enclosure having a longest dimension (L) defining a direction of a corresponding elongate axis for the wireless communication device, characterized in that the method includes:
  • By "substantially vertical" it is meant that the elongate axis is within 45° of vertical direction, more preferably is within 30° of vertical direction, and most preferably is within 20° of vertical direction.
  • The short duration (D) is in a range of 1 second to 20 seconds, more preferably in a range of 1 second to 10 seconds, and most preferably substantially 3 seconds. Other durations are optionally possible for the short duration (D).
  • The method includes using a sensor arrangement of the wireless communication device for sensing an angular orientation of the elongate axis of the wireless communication device and generating a corresponding angle-indicative signal, and employing the one or more software applications to cause the computing hardware to receive the angle-indicative signal and to rotate the sub-portions of the captured images so that they appear when viewed to be upright and stable images.
  • The method includes employing the one or more software applications when executed upon the computing hardware to present one or more icons representative of video clips upon the graphical user interface, and one or more icons representative of sorting bins into which the one or more icons representative of video clips are susceptible to being sorted, wherein sorting of the one or more icons representative of video clips into the one or more icons representative of sorting bins is invoked by a user swiping motion executed by a thumb or finger of the user on the graphical user interface, wherein a given icon representative of a corresponding video clip is defined at a beginning of the swiping motion and a destination sorting bin for the selected icon representative of a corresponding video clip is defined at an end of the swiping motion.
  • The method includes employing the one or more software applications executing upon the computing hardware to cause the one or more icons representative of video clips upon the graphical user interface to be sorted to be presented in a scrollable array along a longest length dimension of the graphical user interface.
  • The method includes employing the one or more software applications executing upon the computing hardware to cause the one or more icons representative of video clips upon the graphical user interface to be sorted to be presented in a spatial arrangement indicative of a time at which the video clips were captured by the optical imaging sensor.
  • The method includes employing the one or more software applications to cause the at least one of the one or more icons representative of sorting bins, into which the one or more icons representative of video clips are susceptible to being sorted, to be a trash bin, wherein the computing hardware is operable to present the user with a graphical representation option for emptying the trash bin to cause data stored in the data memory corresponding to contents of the trash bin to be deleted for freeing data memory capacity of the data memory.
  • The method includes employing the one or more software applications when executed upon the computing hardware to enable the wireless communication device to upload one or more video clips from the data memory to one or more remote proxy servers and to manipulate the one or more video clips uploaded to the one or more proxy servers via user instructions entered via the graphical user interface.
  • A software product recorded on machine-readable data storage media, characterized in that the software product is executable upon computing hardware for implementing a method pursuant to the eighth aspect of the invention.
  • The software product is downloadable from an App store to a wireless communication device including the computing hardware. It will be appreciated that features of the invention are susceptible to being combined in various combinations without departing from the scope of the invention as defined by the appended claims.
  • FIG. 1 is an illustration of a contemporary laptop or desktop computer arranged to execute software products for providing a user environment for editing video clips and/or still pictures to generate corresponding composite video creations;
  • FIG. 2 is an illustration of a conventional platform for distributing video content;
  • FIG. 3 is an illustration of a contemporary smart telephone which is operable to execute one or more software applications for implementing the present invention;
  • FIG. 4 is an illustration of an editing environment provided on the contemporary smart telephone of FIG. 3;
  • FIG. 5 is an illustration of timeline icons and transverse icons presented to a given user in the editing environment of FIG. 4;
  • FIG. 6 is an example of sound analysis employed in the smart telephone of FIG. 3;
  • FIG. 7 is an example of sound track editing performed without altering tonal pitch of the sound track;
  • FIG. 8A to FIG. 8D are illustrations of video editing which is implementable using the smart telephone of FIG. 3;
  • FIG. 9 is an illustration of a media distribution system pursuant to the present invention.
  • FIG. 10 is an illustration of advertisement content generation pursuant to the present invention.
  • FIG. 11 is an illustration of an application of the present invention in association with a social event, for example a sports event;
  • FIG. 12 is an illustration of a conventional contemporary mobile telephone;
  • FIG. 13 is an illustration of a contemporary mobile telephone and active elements included within the contemporary mobile telephone;
  • FIG. 14 is an illustration of a contemporary mobile telephone adapted to implement a camera apparatus pursuant to the present invention;
  • FIG. 15 is an illustration of image field manipulation adopted when implementing the present invention on the contemporary mobile telephone of FIG. 14;
  • FIG. 16 is an illustration of image stabilization performed on the contemporary mobile telephone of FIG. 14;
  • FIG. 17 is an illustration of video clip sorting implemented on the contemporary mobile telephone of FIG. 14; and
  • FIG. 18 is an illustration of video clip uploading from the contemporary mobile telephone of FIG. 14 to an external proxy database.
  • an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent.
  • a non-underlined number relates to an item identified by a line linking the non-underlined number to the item.
  • the non-underlined number is used to identify a general item at which the arrow is pointing.
  • the present invention is concerned with a wireless communication device 100, for example a contemporary smart telephone, which includes computing hardware 110 coupled to a data memory 120, to a touch-screen graphical user interface 130, and to a wireless communication interface 140.
  • the wireless communication device 100 is operable to communicate via a cellular wireless telephone network 150, for example to one or more external databases 160.
  • the computing hardware 110 and its associated data memory 120 are of sufficient computing power to execute software applications 200, namely "Apps", downloaded to the wireless communication device 100 from the one or more external databases 160, for example from an "App store" thereat.
  • the wireless communication device 100 includes an exterior casing 250 which is compact and generally elongate in form, namely having a physical length dimension L which is longer than its width and thickness physical dimensions W, T respectively; an elongate axis 260 defines the length dimension L as illustrated.
  • it is customary for the devices to have substantially planar front and rear major surfaces 270, 280 respectively, wherein the front major surface 270 includes the touch-screen graphical user interface 130 and a microphone 290, and wherein the rear major surface 280 includes an optical imaging sensor 300, often referred to as being a "camera".
  • when employed by a given user, the wireless communication device 100 is most conveniently held in an orientation in which the elongate axis 260 extends top-to-bottom as observed by the given user, for example such that the microphone 290 is beneath the touch-screen graphical user interface 130 when viewed by the given user.
  • a software application 200 for implementing the present invention is pre-loaded into the data memory 120 of the wireless communication device 100 and/or is downloaded from the one or more external databases 160 onto the data memory 120 of the wireless communication device 100.
  • the software application 200 is executable upon the computing hardware 110 to generate an environment for the given user to edit video clips and/or still pictures via the touch-screen graphical user interface 130, namely an environment which is convenient to employ by the given user, despite the limited size and pointing resolution of the graphical user interface 130, which functions in a manner which is radically different to that provided from known contemporary video editing software as aforementioned for use in laptop and desktop computers.
  • An example user environment presented on the touch-screen graphical user interface 130 by execution of the software application 200 upon the computing hardware 110 will now be described in greater detail.
  • in FIG. 4 there is shown the touch-screen graphical user interface 130 in an orientation as viewed by the given user when executing editing activities pursuant to the present invention; the elongate axis 260 is conveniently orientated from top-to-bottom.
  • the software application 200 executing upon the computing hardware 110 presents a time line 400 from top-to-bottom.
  • This timeline 400 represents a temporal order in which video clips are assembled into a composite video creation.
  • a series of icons 410 presented along the timeline 400 range from an icon I(1) to an icon I(n), where there are n icons 410 corresponding to video clips to be accommodated in the composite creation; optionally, n is so large that not all icons 410 from I(1) to I(n) can be shown simultaneously on the touch-screen graphical user interface 130, requiring a swipe-scrolling action by the given user to examine and manipulate them as will be described later.
  • the integer n is initially user-defined; alternatively the given user can add as desired one or more additional icons 410 within the series of icons 410 as required, and given user can also subtract as desired one or more icons 410 from the series of icons 410 as required.
  • the given user can move along the series of icons 410 on the touch-screen graphical user interface 130 to work on a given desired icon 410.
  • the software application 200 executing upon the computing hardware 110 is operable to cause a selection of video clips represented as icons 510 to appear which can be inserted by user-selection for inclusion in the composite video creation, to be represented by the icon I(i).
  • the icons 510 are shown as a transverse series which are scrollable by way of the given user performing a transverse finger or thumb swiping motion along the transverse axis 450 on the touch-screen graphical user interface 130.
  • the icons 510 when scrolled are overlaid onto the icon I(i) on the touch-screen graphical user interface 130; the given user can incorporate the video clip corresponding to the icon 510 overlaid onto the given icon I(i) by tapping the touch-screen graphical user interface 130 at the icon I(i), or else by depressing an "add" button area 520 provided along a side of the touch-screen graphical user interface 130.
  • the given user progresses up and down the series of icons 410 until all desired video clips from the icons 510 are incorporated into the icons 410.
  • Incorporation of user-selected icons 510 into the icons 410 as aforementioned causes corresponding movement or linking of video data corresponding to the icons 510. Such linking of video data can occur:
  • manipulation of video data, for example uploading of video data from the wireless communication device 100 to the one or more external databases 160, is beneficially implemented when the given user has completed a session of editing along the timeline 400, for example by way of the given user depressing an "execute edit" button area 530 of the touch-screen graphical user interface 130, thereby reducing a need to communicate large volumes of data via the cellular wireless telephone network 150.
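The deferred "execute edit" behaviour described above amounts to batching edit operations locally and transmitting only a compact edit list, rather than the video data itself, when the editing session ends. A minimal Python sketch, in which the class name EditSession and the JSON payload layout are assumptions of this illustration:

```python
import json

class EditSession:
    """Accumulate edit operations locally; only a small JSON document,
    not video data, crosses the cellular network on "execute edit"."""

    def __init__(self):
        self.ops = []

    def link_clip(self, slot, clip_id):
        # link a clip into a timeline slot (icon 410 position)
        self.ops.append({"op": "link", "slot": slot, "clip": clip_id})

    def remove_clip(self, slot):
        # remove whatever clip occupies a timeline slot
        self.ops.append({"op": "remove", "slot": slot})

    def execute_edit(self):
        # serialise and clear the pending edit decision list
        payload = json.dumps({"edits": self.ops})
        self.ops = []
        return payload
```

The remote database would replay the list against full-resolution video it already holds, so edits cost kilobytes rather than megabytes of uplink.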
  • the given user can play corresponding video on the touch-screen graphical user interface 130 by tapping the icon 410, 510; alternatively, the given user places a desired icon to be played at an intersection of the timeline 400 and the axis 450 and then taps the touch-screen graphical user interface 130 at the intersection, or depresses a "play" button area 540 of the touch-screen graphical user interface 130.
  • when the video data corresponding to the selected icon 410, 510 resides in the data memory 120, the computing hardware 110 merely plays a low-resolution version of the selected video content to remind the given user of its content; alternatively, when the video data corresponding to the selected icon 410, 510 resides in the one or more external databases 160, a low-resolution version of the selected video content is optionally streamed to the wireless communication device 100 in real time from the one or more external databases 160.
  • the touch-screen graphical user interface 130 as seen in FIG. 4 and FIG. 5 may also optionally have an orientation in a horizontal mode wherein the wireless communication device 100 is rotated substantially 90 degrees clockwise or, more preferably, substantially 90 degrees counter-clockwise. This may be useful for certain operations that allow the user to see a wider screen of the video, the user then being allowed to return to the vertical mode if and when required. Further, there is also an option of providing a means of video preview in the user interface in both the shown vertical mode of FIG. 4 and FIG. 5 and the described horizontal mode of operation. The video preview is overlaid, or may be moved around, in the user interface 130 to assist in the editing or for viewing of the partially or fully completed video.
  • the software application 200 is capable of providing a high degree of automatic coupling of video clips together to generate the composite video creation. It enables the given user not only to capture video clips using his/her wireless communication device 100, but also enables the given user to compose complex composite video creations from his/her wireless communication device 100; such functionality is inadequately catered for using contemporarily available software applications.
  • the icons 510 presented along the transverse axis 450 are chosen by execution of the software application 200 to be in graded relevance, for example one or more of:
  • a next video clip of similar type of video content to video clips preceding or following the icon along the timeline 400, thus enabling the given user to maintain a given theme in the video clips along the timeline 400 when composing the composite video creation, for example a given video clip is a picture of the given user's child eating French ice cream, and a next video clip along the timeline 400 presented as an option along the transverse axis 450 is a video clip of the Eiffel Tower in Paris, for example derived from a common database of video clips maintained at the one or more external databases 160;
  • a next video clip proposed along the transverse axis 450 is captured from a generally similar geographical area as pertaining to video clips preceding or following the icon I(i) along the timeline 400, for example determined by the video clips having associated therewith metadata including GPS and/or GPRS position data which can be searched for relevance; one or more sound tracks proposed along the transverse axis 450, for example one or more music tracks, to be added to the video clip selected by the given user for the icon I(i); the one or more sound tracks can be those captured by the given user, alternatively for example derived from a common database of sound tracks maintained at the one or more external databases 160; and
  • (iii) F3 by temporally stretching and/or shrinking one or more of the video clip and the music track so that they mutually temporally match.
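The geographical-relevance suggestion described above can be approximated by ranking candidate clips by great-circle distance between their GPS metadata and that of the clip currently being edited. A sketch, where the dictionary keys lat and lon are assumed metadata field names of this illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def rank_by_proximity(candidates, anchor):
    """Order candidate clips so those captured nearest the anchor clip
    appear first along the transverse axis."""
    return sorted(
        candidates,
        key=lambda c: haversine_km(c["lat"], c["lon"],
                                   anchor["lat"], anchor["lon"]))
```

A clip shot near the Eiffel Tower would thus be offered before one shot in Stockholm when the anchor clip was captured in Paris.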
  • the software product 200 is operable to load a given sound track 600 to be analysed into data memory, for example into the data memory 120 or corresponding proxy memory at the one or more external databases 160.
  • the sound track 600 is represented by a signal s(j) which has signal values s(1) to s(m) from its beginning to its end, wherein j and m are integers, and j represents temporal sample points in the signal s(j) and has a value in a range from 1 to m.
  • the signal s(j) typically has many hundred thousand sample points to many millions of sample points, depending upon temporal duration of the signal s(j) from 1 to m.
  • the signal s(j) is a multichannel signal, for example a stereo signal.
  • the signal s(j) is subjected to processing by the software application 200 executing upon the computing hardware 110, alternatively or additionally by corresponding software applications at the one or more external databases 160 under proxy control of the wireless communication device 100, to apply temporal bandpass filtering denoted by 610 using digital recursive filters and/or a Fast Fourier Transform (FFT) to generate an instantaneous harmonic spectrum h(j, f) of the signal s(j) at each sample point j along the signal s(j), wherein h is an amplitude of a harmonic component and f is a frequency of the harmonic component as illustrated in FIG. 6.
  • Certain instruments such as cymbals and bass drums defining beat generate a particular harmonic signature which occurs temporally repetitively in the harmonic spectrum h as a function of the integer j.
  • a period of the harmonic signature of the certain instruments defining beat can be determined by subjecting the harmonic spectrum h(j, f), for a limited frequency range f1 to f2 corresponding to the harmonic signature of such instruments, to further recursive filtering and/or Fast Fourier Transform (FFT), denoted by 620, as a function of the integer j to find a duration of the beat, namely bar, from a peak in the spectrum generated by such analysis 620.
  • the signal s(j) can then be cut by the software application 200 executing upon the computing hardware 110, alternatively by proxy at the one or more external databases 160, to provide automatically an edited sound track which is typically cut cleanly at a beat or bar in the original music track represented by the signal s(j).
  • Such an analysis approach can also be used to loop back at least a portion of the sound track to extend its length, wherein loopback occurs precisely at a beat or bar-end in the music track.
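The beat analysis 610/620 described above (bandpass filtering to isolate percussive instruments, a second spectral analysis of the resulting envelope to find the beat period, then cutting on a beat boundary) can be sketched with NumPy. This is an illustrative reconstruction rather than the patented algorithm; the band limits and the 30-240 BPM tempo range are assumptions of this sketch:

```python
import numpy as np

def beat_period(signal, fs, band=(40.0, 160.0)):
    """Estimate the beat period of a percussive track:
    1) bandpass the waveform to the band where kick/cymbal energy lives,
    2) rectify it to obtain an energy envelope,
    3) FFT the envelope and pick the strongest peak in a plausible
       tempo range (0.5-4 Hz, i.e. 30-240 BPM)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0
    envelope = np.abs(np.fft.irfft(spectrum, len(signal)))
    env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    tempo_band = (freqs >= 0.5) & (freqs <= 4.0)
    beat_hz = freqs[tempo_band][np.argmax(env_spec[tempo_band])]
    return 1.0 / beat_hz

def cut_on_beat(signal, fs, target_seconds):
    """Cut the track at the beat boundary nearest the requested duration."""
    period = beat_period(signal, fs)
    n_beats = max(1, round(target_seconds / period))
    return signal[: int(n_beats * period * fs)]
```

The same period estimate can drive loop-back extension: repeating a whole number of beats rather than an arbitrary span keeps the loop seam inaudible.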
  • the analysis 610 also enables the music track 600 to be analysed to determine whether it is beat music or slowly changing effects music, for example meditative organ music having long sustained tones, which is more amenable to fading pursuant to aforesaid technique F1.
  • the software product 200 is operable to load a sound track 700 to be analysed into data memory, for example into the data memory 120 or corresponding proxy memory at the one or more external databases 160.
  • the sound track 700 is represented by a signal s(j) which has signal values s(1) to s(m) from its beginning to its end, wherein j and m are integers, and j represents temporal sample points in the signal s(j) and has a value from 1 to m.
  • the signal s(j) typically has many hundred thousand sample points to many millions of sample points, depending upon temporal duration of the signal s(j) from 1 to m.
  • the signal s(j) is a multichannel signal, for example a stereo signal.
  • the signal s(j) is subjected by the software application 200 executing upon the computing hardware 110, alternatively or additionally by corresponding software applications at the one or more external databases 160 under proxy control of the wireless communication device 100, to temporal bandpass filtering denoted by 710 using digital recursive filters and/or a Fast Fourier Transform (FFT) to generate an instantaneous harmonic spectrum h(j, f) of the signal s(j) at each sample point j along the signal s(j), wherein h is an amplitude of a harmonic component and f is a frequency of the harmonic component as illustrated in FIG. 6.
  • a slowed-down or speeded-up sound track is represented by h''(d2·j, f), wherein d1 and d2 are mutually different.
  • the duration d2 can be chosen so that the sound track h''(d2·j, f), when subjected to an inverse Fast Fourier Transform (i-FFT), denoted by 720, is of similar duration to a video clip, or series of video clips, to which the sound track h''(d2·j, f) is to be added.
  • the software application 200 allows the given user to alter the tempo of the music track within a duration of the music track, for example to slow down the music track at a time corresponding to a particular event occurring in the video clip, for artistic or dramatic effect which makes the composite video creation more exciting or interesting for subsequent viewers, for example when the composite video creation is shared over aforesaid social media. Such slowing down or speeding up of the tempo of the music track, without altering the frequency of tones in the music track, is not a feature provided in contemporary video editing software, even for laptop and desktop personal computers.
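Changing tempo without altering tonal pitch, described above in terms of the stretched spectrum h''(d2·j, f) and the inverse FFT 720, is conventionally realised with a phase vocoder: the short-time spectrum is resampled along the time axis while each bin's phase is advanced by its measured per-hop increment. The sketch below is a minimal illustrative phase vocoder, not the patented method itself, and the frame and hop sizes are arbitrary assumptions:

```python
import numpy as np

def stretch(signal, factor, n_fft=1024, hop=256):
    """Phase-vocoder time stretch: factor > 1 lengthens the track,
    factor < 1 shortens it, while leaving tonal pitch unchanged."""
    window = np.hanning(n_fft)
    # analysis: windowed STFT frames at fixed hop
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft, hop)]
    stft = np.array([np.fft.rfft(f) for f in frames])
    # synthesis frame positions resampled along the time axis
    steps = np.arange(0, len(stft) - 1, 1.0 / factor)
    phase = np.angle(stft[0])
    out = np.zeros(len(steps) * hop + n_fft)
    expected = 2 * np.pi * hop * np.arange(n_fft // 2 + 1) / n_fft
    for k, step in enumerate(steps):
        i = int(step)
        frac = step - i
        # interpolate magnitudes between neighbouring analysis frames
        mag = (1 - frac) * np.abs(stft[i]) + frac * np.abs(stft[i + 1])
        frame = np.fft.irfft(mag * np.exp(1j * phase), n_fft)
        out[k * hop:k * hop + n_fft] += frame * window
        # advance phase by the measured inter-frame phase increment
        dphi = np.angle(stft[i + 1]) - np.angle(stft[i]) - expected
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        phase += expected + dphi
    return out
```

Stretching a pure tone by 1.5x yields roughly 1.5x the samples with the dominant frequency unchanged, which is exactly the property the passage above contrasts with naive resampling.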
  • the software application 200 is capable of processing video clips to extend their length or shorten their length for rendering them compatible in duration with sound tracks, for removing irrelevant or undesirable video subject matter and similar.
  • the software application 200, or corresponding software applications executing at the one or more external databases 160 under proxy control from the software application 200, when executed upon the computing hardware 110 are operable to enable a video clip 800 to be manipulated in data memory, for example in the data memory 120.
  • the video clip 800 includes a header frame 810, for example an initial I-frame when in MPEG format, and a sequence thereafter of dependent frames, for example P-frames and/or B-frames when in MPEG format.
  • a new header frame 830 is synthesized by the software application 200 or its proxy as aforementioned.
  • additional frames are added which cause the video clip 800 to replay more slowly, or momentarily pause, for example by adding one or more P-frames and/or B-frames 840 when in MPEG format; this is illustrated in FIG. 8B.
  • the added one or more P-frames and/or B-frames correspond to causing the video track 800 to loop back along at least a part of its sequence of images.
  • one or more frames 860 are removed from the video clip 800 after its initial header 810, for example one or more B-frames or P-frames when in MPEG format, and remaining abutting frames either side of where the one or more frames have been removed are then amended to try to cause as smooth a transition as possible between the abutting frames; otherwise, such a cut is experienced when the video is replayed as a momentary visual jerking motion or sudden angular shift in a field of view of the video clip.
  • as illustrated in FIG. 8D, the video clip 800 can also be extended using the software application 200 and/or corresponding software applications executing at the one or more external databases 160 under proxy control from the software application 200, by inserting supplementary subject matter 900, for example experienced when viewing the video clip as a still image relevant to the subject matter of the video clip 800; for example, the video clip 800 is taken along a famous street in Sweden, and then a brief picture of Gamla Stan in Sweden is briefly shown for extending a duration of the video clip 800.
  • the software application 200 selects the inserted subject matter 900 from metadata associated with the video clip 800, and/or by analysing the video clip 800 to find related subject matter, for example by employing neural network analysis or similar.
  • the subject matter 900 is inserted into the video clip 800 by dividing the video clip 800 into two parts 800A, 800B, each with its own start frame, for example each with its own I-frame when implemented in MPEG, and then inserting the subject matter 900 as illustrated between the two parts 800A, 800B.
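The frame-level operations of FIG. 8A to FIG. 8D (adding dependent frames to slow a clip, removing frames to shorten it, and splitting a clip into parts 800A/800B around inserted still subject matter 900) can be modelled on a simple frame list. Real MPEG editing must also rewrite motion vectors and reference dependencies; the sketch below models only the sequencing, and the Frame type is an assumption of this illustration:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str    # "I" (header), or dependent "P" / "B"
    payload: str

def slow_down(clip, at, extra):
    """Lengthen a clip by repeating a dependent frame at index `at`,
    analogous to inserting extra P-/B-frames after the header."""
    return clip[:at] + [clip[at]] * extra + clip[at:]

def shorten(clip, start, count):
    """Remove `count` dependent frames; never drop the header at index 0."""
    start = max(1, start)
    return clip[:start] + clip[start + count:]

def insert_still(clip, at, still_payload):
    """Split the clip into two parts, each led by its own header frame,
    with a still image inserted between them (cf. parts 800A/800B)."""
    part_a = clip[:at]
    part_b = [Frame("I", clip[at].payload)] + clip[at + 1:]
    still = [Frame("I", still_payload)]
    return part_a + still + part_b
```

Note how insert_still synthesises a fresh header for the second part, mirroring the requirement that each of 800A and 800B begin with its own I-frame.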
  • the software application 200 is thus capable of executing automatic editing of video clips and/or sound tracks so that they match together in a professional manner, wherein such automation is necessary because the touch-screen graphical user interface 130 provides insufficient pointing manipulation accuracy and/or viewed visual resolution, especially when the given user has impaired eyesight, to enable precise manual editing operations to be performed.
  • the software application 200 and/or its proxy may not always achieve an aesthetically perfect edit. Beneficially, along the transverse axis 450, the software application 200 is operable to present the given user with a range of aforementioned edits to match video clips and sound tracks together, for example generated using a random number generator to control aspects of the editing, for example where frames are added or removed, or where a music track is cut at an end of a selected music bar, at least in part depending upon a random number, so that the given user can select, amongst the proposed edits implemented automatically by the software application 200, a best automatically generated edit.
  • the series of edits proposed by the software application 200 and/or its proxy are filtered for highlighting types of edits which the software application 200 recognizes to be to the taste of the given user, for example based upon an analysis of earlier choices made by the given user when selecting amongst automatically suggested edits of video clips and sound tracks, for example by way of neural network analysis of the given user's earlier choices.
  • the software application 200 is capable of operating in an adaptive manner to the given user.
  • when the given user has completed generation of the composite video creation, stored in at least one of the data memory 120 and the one or more external databases 160, the given user is able to employ the software application 200 executing upon the computing hardware 110 to send the composite video creation to a web-site for distribution to other users, and/or to a data store of the given user for archival purposes.
  • the web-site for distribution can be, for example, a social media web-site, or a commercial database from which the composite video creation is licensed or sold to other users in return for payment back to the given user.
  • the present invention thereby enables the given user both to capture video clips and sound tracks using his/her wireless communication device 100, for example smart telephone, as well as using his/her wireless communication device 100 to edit the video clips and sound tracks to generate composite video creations for distribution, for example in return for payment.
  • the present invention is pertinent, for example, to poorer parts of the World where the given user may be able to afford the wireless communication device 100, but cannot afford in addition a lap-top computer or desktop computer.
  • by generating composite video creations using their smart telephones, such users from poorer parts of the World are able to become "film producers" and thereby vastly increase a choice of video content available around the World to the benefit of humanity as a whole.
  • a media distribution system 1100 is especially well adapted for handling short video clips, for example substantially 3 second duration video clips generated from mobile wireless devices 1110, for example contemporary smart telephones, for example the aforementioned wireless communication device 100.
  • the system 1100 includes, for example hosts, the aforesaid one or more databases 160.
  • the devices 1110 each include computing hardware 1120 coupled to data memory 1130, to a wireless interface 1140, to a graphical user interface 1150, to a video camera 1160, and to a microphone 1170 and loudspeaker 1180.
  • the media distribution system 1100 includes a communication network 1200 for receiving video content from the devices 1110 and also one or more databases 1210 for storing users' video content which has been uploaded from their devices 1110.
  • the system 1100 is optionally operable to receive video content from other sources, for example from contemporary digital cameras whose video content is uploaded, for example via a personal computer (PC), through the Internet to the one or more databases 1210.
  • the media distribution system 1100 operates in several ways which are radically different to the platform 40 of FIG. 2, namely:
  • the system 1100 communicates the video content 1250 provided from one or more of the devices 1110 of one or more given users 1260 to a group of reviewing users 1270 who assess the communicated video content 1250 and make a recommendation 1280, for example a rating of the video content 1250, a decision YES/NO whether or not the video content should be used for a given purpose and so forth.
  • the group of reviewing users 1270 is implemented, at least in part, automatically using computing machine-intelligence, for example by way of neural networks implemented in computing hardware and software, and/or by rule-based analysis, for example employing one or more stages of Fourier analysis of sound signals, and spatial Fourier analysis of images in video content;
  • the system 1100 establishes special interest groups of users who have mutually common interests, for example steam trains, classical pipe organs, stamp collecting, antiques, cycling, golf and so forth; each user is optionally a member of several different interest groups; optionally, the group of reviewing users 1270 in (a) are users in a special interest group reviewing video content whose subject matter pertains to the special interest group;
  • the system 1100 is operable to provide the given users 1260 with video advertisements which at least one of: include as an integral part thereof one or more video clips generated by a given user 1260 to which the advertisements are presented; and include as an integral part thereof one or more video clips generated by one or more users who are in a similar special interest group to a given user 1260 to which the advertisements are presented.
  • the system 1100 is thus operable to generate video advertisements as illustrated in FIG. 10.
  • the system 1100 establishes special interest groups of users who have mutually common interests, for example steam trains, classical pipe organs, stamp collecting, antiques, cycling, golf, music tracks, music songs and so forth as aforementioned; each user is optionally a member of several different interest groups; optionally, the special interest groups are determined automatically by performing an analysis of a manner in which the given users 1260 view, combine together or couple together one or more video clips 1330.
  • the system 1100 determines therefrom that there is an increased probability that the substantially temporally constant set of video clips 1330 all belong to a mutually similar special interest group; optionally, the system 1100 determines that a given video clip 1330 is likely to belong to a given special interest group from probabilities of association between the given clip and the special interest group based upon probabilities derived from a plurality of users 1260.
  • the system 1100 is optionally configured so that a given user 1260 is able to select a given first video clip 1330A and to add thereto a second video clip 1330B to generate a third composite video clip 1330C; such combining optionally occurs in the device 1110, or alternatively in a proxy manner at the one or more databases 1210 via control from the device 1110 of the given user 1260.
  • Combining the first and second video clips 1330A, 1330B together is logged by the system 1100 as indicating an increased probability that the first and second video clips 1330A, 1330B belong to video clips associated with a given special interest group represented by the given user 1260.
  • Such combining includes, for example, downloading a musical video clip 1330A generated by the first user 1260, for example a first musician playing a backing track for a piece of music, and a second user 1260 recording in real time a second video clip 1330B whilst replaying the musical video clip 1330A, and then combining them to provide a third composite video clip 1330C which is uploaded to the one or more databases 1210 for other users 1260 subsequently to download, or control via proxy, to add their video clips thereto.
  • Such an approach associates the users 1260 into mutually similar special interest groups, as well as identifying that the video clips 1330A, 1330B, 1330C are all likely to belong to a group of video clips pertaining to the special interest group including the first and second users 1260.
  • such determination of special interest groups occurs in the system 1100 by way of user operations when managing video clips 1330, without any assessment being necessary from the reviewing users 1270.
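The automatic determination of special interest groups from combining behaviour, as described above, amounts to logging which clips users combine and deriving association probabilities from the co-occurrence counts. A sketch using a simple Jaccard-style score; the class name InterestGrouper and the scoring formula are assumptions of this illustration, not the specification's method:

```python
from collections import Counter
from itertools import combinations

class InterestGrouper:
    """Log which clips users combine; clips repeatedly combined together
    are scored as likely members of the same special interest group."""

    def __init__(self):
        self.pair_counts = Counter()  # times two clips were combined
        self.clip_counts = Counter()  # times each clip was used at all

    def log_combination(self, clip_ids):
        # record one combining session (e.g. 1330A + 1330B -> 1330C)
        for c in clip_ids:
            self.clip_counts[c] += 1
        for a, b in combinations(sorted(clip_ids), 2):
            self.pair_counts[(a, b)] += 1

    def association(self, a, b):
        # Jaccard-style score: combinations together / uses of either clip
        pair = self.pair_counts[tuple(sorted((a, b)))]
        union = self.clip_counts[a] + self.clip_counts[b] - pair
        return pair / union if union else 0.0
```

Clips whose pairwise score exceeds a threshold across many users 1260 would be assigned to the same candidate group, with no input needed from the reviewing users 1270.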
  • the users 1260 pay a fee to the system 1100 for being permitted to use the first video clip 1330A by adding their second video clip 1330B to generate the combined video clip 1330C, for example a royalty payment to the first user 1260.
  • such payment takes into account an extent, for example a number of downloads and/or viewings, to which the composite video clip 1330C is accessed by other users 1260 of the system 1100.
  • the present invention is capable of bringing many benefits in comparison to contemporary aforesaid known platforms such as Facebook and YouTube. Advertisers who elect to advertise via the system 1100 will have much greater impact with their advertisements, because including one or more video clips of the given user 1260 into advertisements presented to the given user 1260 will achieve more sympathy and attention from the given user 1260, and thus improved advertising impact. Similarly, video clips generated by other users in a similar special interest group to that of the given user 1260 are more likely to be of interest to the given user 1260, hence resulting in the advertisements being more sympathetically received by the given user 1260, and hence causing the given user 1260 to be more positively inclined towards the products and/or services presented to the given user 1260 in the advertisements.
  • the system 1100 is also capable of functioning as a "sales market" for video clips. This enables each user of a device 1110 to become their own film director and receive benefit, for example financial benefit, if their videos receive a good recommendation and are subsequently employed by other users of the system 1100. This enables poorer people in poorer parts of the World to have an alternative source of income, as well as educating other parts of the World, for example the rich First World, about conditions in poorer parts of the World by way of short video clips, namely an aim promoted by UNESCO and other international relief agencies. Moreover, news gathering agencies are able to offer payment for video clips provided via the system 1100 from various parts of the World from where they are desirous to gather news, for example from natural disaster areas and from war afflicted areas.
  • An example use of the system 1100 is in a social event as depicted in FIG. 11 , for example a sports event.
  • a sports event occurs in an arena 1400, for example on a football pitch, on a tennis court, in a boxing ring, or similar.
  • the given users 1260 are present in a spectator or viewer area surrounding at least a part of a periphery of the arena 1400.
  • the given users 1260 are spectators and employ their mobile wireless devices 1110 to take short video clips of the event.
  • the short video clips are communicated by wireless to a group of reviewing users 1270 who are responsible for the event and who rapidly review substantially in real time the short video clips to determine which would be most suitable to display in substantially real time on a large display screen 1410 mounted adjacently to the arena 1400.
  • the short video clips of given users 1260 that are presented on the large display screen 1410, for example a projection display screen, are rewarded by the group of reviewing users 1270 providing the given users 1260 with a form of bonus, for example free tickets to other sports events or financial payment.
  • Such an arrangement is capable of increasing interest in sports events attended by the given users 1260 and also provides the reviewing users 1270 with a collection of short video clips which can be employed for generating video content for communication via international communication to other parts of the World for generating sports televising revenues.
  • although a sports event is described in the foregoing, the system 1100 is susceptible to being employed at music concert events, ceremonial events and so forth.
  • the group of reviewing users 1270 is implemented, at least in part, automatically using computing machine-intelligence, for example by way of neural networks implemented in computing hardware and software, and/or by rule-based analysis, for example employing one or more stages of Fourier analysis of sound signals, and spatial Fourier analysis of images in video content.
  • a classifying function performed by the reviewing users 1270 is optionally performed as described in the foregoing by way of patterns of user downloading of video clips 1330, viewing of video clips 1330, and by a manner in which video clips 1330 are combined to generate corresponding composite video clips 1330.
  • the system 1100 is also capable of extracting aggregate data from the flow of short video clips communicated via the system 1100, for example data pertaining to specialist interest groups.
  • This aggregated data, for example trend analysis data, maintains the confidentiality of individual given users 1260, but provides useful insight into dissemination of information.
  • the system 1100 is capable, by monitoring dissemination of the composite video content, of determining the following:
  • a popularity of a given sound clip, for example a song or instrumental music piece; and
  • a video context in which users consider the given sound clip to be most suitable.
  • Such information is potentially useful to popular music bands that are desirous to release their latest music productions in a manner which will generate most sales and will also assist to increase an awareness and profile of the popular music bands.
  • the system 1100 is able to offer such an investigative aggregation service to popular music bands and similar in return for payment.
  • the system 1100 is capable of providing novel features which render it more beneficial to users in comparison to contemporary known content distribution platforms 40 such as YouTube and similar.
  • short video clips for example video clips having a playing duration in a range of 1 to 10 seconds, more optionally substantially 3 seconds
  • the present invention is mutatis mutandis also relevant to the distribution of still pictures.
  • the wireless communication device 2010 is optionally implemented as the aforementioned device 1110 or the aforementioned device 100.
  • the device 2010 is, for example, a compact contemporary smart phone.
  • the device 2010 includes computing hardware 2020 coupled to a data memory 2030, a touch-screen graphical user interface 2040, a wireless communication interface 2050, an optical pixel array sensor 2060, and a microphone 2070 and associated speaker 2080 for user oral communication.
  • the wireless communication device 2010 is operable to communicate via a cellular wireless telephone network denoted by 2085.
  • the computing hardware 2020 and its associated data memory 2030 are of sufficient computing power to execute software applications, namely "Apps", downloaded to the wireless communication device from an external database 2090, for example from an "App store”.
  • the wireless communication device 2010 includes an exterior casing 2100 which is compact and generally elongate in form, namely having a physical dimension to its spatial extent which is longer along an elongate axis 2110 than its other physical dimensions; for example, the exterior casing 2100 has a length L along the elongate axis 2110 which is greater than its width W, and also greater than its thickness T.
  • It is customary for the device 2010 to have substantially mutually parallel front and rear major planar surfaces 2120, 2130 respectively, wherein the touch-screen graphical user interface 2040 is implemented in respect of the front major planar surface 2120 and the optical pixel array sensor 2060 is implemented in respect of the rear major planar surface 2130.
  • Such an implementation enables the device 2010 to be employed for oral dialogue, namely conversations, when the users are in a standing state and the elongate axis 2110 is orientated in a substantially vertical manner.
  • It is contemporary design practice for the device 2010 to be rotated by 90° when the device 2010 is to be employed in its camera mode; in such a camera mode, a user holds the device 2010 at its elongate ends with both user hands, such that the elongate axis 2110 is substantially horizontal, and the optical pixel array sensor 2060 is orientated typically away from the user towards a scene of interest whilst the touch-screen graphical user interface 2040 presents to the user in real time a view as observed from the optical pixel array sensor 2060.
  • the user then depresses a region of the touch-screen graphical user interface 2040 to capture an image as observed from the optical pixel array sensor 2060 and stores it as corresponding still image data in the data memory 2030, for example in JPEG or similar contemporary coding format.
  • the user is alternatively able to depress a region of the touchscreen graphical user interface 2040 to capture a sequence of video images as observed from the optical pixel array sensor 2060 and store it as corresponding video content data in the data memory 2030, for example in MPEG or similar contemporary coding format; the user depresses a region of the touch-screen graphical user interface 2040 to terminate capture of the sequence of video images.
  • the user can elect to communicate via wireless the still image data and/or video content data to other users, or onto an external database, for example a "cloud" database residing in the Internet, for archival purposes or for further processing.
  • Such further processing is performed, for example, using video editing software executable upon laptop or desktop personal computers (PCs) which are connectable to the Internet, for example to download data from the "cloud” database.
  • a mobile wireless communication device for example the aforementioned device 2010, can be adapted by executing a suitable software application upon its computing hardware 2020 to operate in a manner which is technically more user-convenient and generates video content data which is more manageable to edit using the device 2010 and more efficient in its use of wireless communication bandwidth when communicated to an external database.
  • At least one novel software application 2200 is downloaded from the external database 2090, or is alternatively already preloaded onto the device 2010.
  • the novel software application 2200 enables the device 2010 to be employed as a camera apparatus for capturing still images and sequences of video images when the device 2010 is orientated with its elongate axis 2110 in substantially a vertical orientation; in other words, the software application 2200 when executed upon the computing hardware 2020 coupled to a data memory 2030 enables the computing hardware 2020 to perform following operations:
  • the rotation-corrected sub-images 2240 are stored in the data memory 2030, optionally together with a copy of the temporal sequence of images 2220 being retained in the data memory 2030.
  • an accelerometer included within the device 2010 and coupled to the computing hardware 2020 provides an angular signal of an orientation of the elongate axis 2110, and the rotation applied in aforesaid step (c) is made a function of the angular signal so that the rotation-corrected sub-images 2240 always appear in an upright orientation, despite the user varying an orientation of the device 2010 when capturing the video content in the data stream 2210.
  • the rotation correction applied is substantially 90°, for example in a range of 65° to 115°.
  • the angular signal is stored in the data memory 2030 for future use, together with the sequence of images 2220.
  • a gyroscopic sensor 2245, for example a silicon micromachined vibrating structure rotation sensor, is included in the device 2010 and coupled to the computing hardware 2020 for providing a real-time angular orientation signal 2250 indicative of an instantaneous angular orientation of the device 2010 about substantially a central point 2260 within the device 2010; the real-time angular orientation signal 2250 is employed by the computing hardware 2020 under direction of the software application 2200 to adjust a position within the images 2220 from where the sub-portions 2230 are extracted, as illustrated in FIG.
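The rotation correction and gyroscope-stabilised sub-portion extraction described above can be sketched as follows. This is a minimal illustration only: the function names, the use of the 65° to 115° window as the condition for applying a correction, and the modelling of the orientation signal 2250 as a pixel offset are assumptions drawn from the surrounding text, not the actual implementation of the software application 2200.

```python
def rotation_correction(angle_deg):
    """Correction (degrees) to apply so a sub-image appears upright,
    given the accelerometer's angular signal for the elongate axis 2110.
    A correction is applied only when the reported tilt lies in the
    65 deg to 115 deg window mentioned above (assumed interpretation)."""
    if 65.0 <= angle_deg <= 115.0:
        return -angle_deg  # rotate back towards upright
    return 0.0


def crop_window(frame_w, frame_h, sub_w, sub_h, gyro_offset_px=(0, 0)):
    """Compute (x, y, w, h) of the sub-portion 2230 to extract: centred
    in the captured image 2220, then shifted by a gyroscope-derived
    offset (the orientation signal 2250 mapped to pixels) and clamped
    to remain inside the frame."""
    dx, dy = gyro_offset_px
    x = max(0, min(frame_w - sub_w, (frame_w - sub_w) // 2 + dx))
    y = max(0, min(frame_h - sub_h, (frame_h - sub_h) // 2 + dy))
    return (x, y, sub_w, sub_h)
```

For a 1920 x 1080 frame and a 720 x 720 sub-portion, the unshifted window starts at (600, 180); gyroscope motion merely slides that window within the frame, so this form of stabilisation requires no resampling of pixels.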
  • the software application 2200 is operable to enable the device 2010 to take short clips 2255 of video content, for example short clips of defined duration D, for example in a range of 1 second to 20 seconds duration, more optionally in a range of 1 second to 10 seconds duration, and yet more optionally substantially 3 seconds duration.
  • each video clip has a duration D and is a self-contained data unit; for example, when the video clip is encoded via MPEG, each video clip would have a commencing intra frame (I-frame) with subsequent predicted frames (P-frames) and/or bidirectional frames (B-frames).
  • one or more additional I-frames are optionally included later in the given video clip.
  • the software application 2200 executing upon the device 2010 is optionally capable of supporting other types of image encoding alternatively, or in addition, to MPEG encoding, for example JPEG, JPEG2000, PNG, GIF, RLE, Huffman, DPCM and so forth.
  • the device 2010 is conveniently operated, when taking one or more aforesaid video clips, by the user holding the device 2010 in one of his/her hands, with the elongate axis 2110 in a substantially vertical direction, with the optical pixel array sensor 2060 pointing in a direction away from the user, with the touch-screen graphical user interface 2040 facing towards the user being provided, via execution of the aforesaid software application upon the computing hardware 2020, with an active area corresponding to a "record button", for example optionally shown as a region of the touch-screen graphical user interface 2040 presented in red colour.
  • the device 2010 then optionally operates such that:
  • (a) depressing the record button for a short instance causes the device 2010 to capture one video clip of duration D, for example substantially 3 seconds duration; and (b) maintaining the record button depressed continuously causes the device 2010 to capture a temporally concatenated sequence of video clips of duration D, for example substantially 3 seconds duration, until the user ceases to depress the record button.
  • Such a manner of operation encourages the user to take short series of video clips of subject matter which is specifically of interest. Moreover, it also encourages the user to take short single clips of events.
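The two record-button behaviours described above (a short tap versus a continuous hold) reduce to a simple duration calculation, sketched below. The function name `clips_for_press` and the 3-second default are illustrative assumptions, not the actual logic of the software application 2200.

```python
import math

DEFAULT_CLIP_DURATION = 3.0  # seconds; the "substantially 3 seconds" example above


def clips_for_press(hold_time, d=DEFAULT_CLIP_DURATION):
    """Durations of the video clips captured for one record-button press:
    a short tap yields a single clip of duration d, whereas keeping the
    button depressed yields a temporally concatenated sequence of
    d-second clips until the button is released."""
    if hold_time <= d:
        return [d]                            # short tap: one clip of duration D
    return [d] * math.ceil(hold_time / d)     # continuous hold: concatenated clips
```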
  • Enabling the device 2010 to capture images and/or video clips in an orientation with its elongate axis in a substantially vertical direction places the user and the device 2010 in a posture customary for undertaking a telephone conversation; this enables the user to capture video content in an unobtrusive manner, whilst appearing to be undertaking a telephone conversation, thereby enabling scenes to be captured by the device 2010 in a less imposing and more natural manner and potentially resulting in more interesting video content being generated.
  • the aforesaid video clips recorded in the data memory 2030 soon occupy considerable memory capacity therein, especially if the user elects to keep both data corresponding to the data stream 2210 as well as data corresponding to the rotation-corrected sub-images 2240.
  • the data stream 2210 is recorded in the data memory 2030 together with rotational angle data pertaining to the device 2010 at a time when the one or more video clips and/or still images were captured, with the rotation-corrected sub-images 2240 being subsequently generated after capture of the data stream 2210.
  • the record button functions in a manner akin to a conventional camera shutter button, namely an image is captured at an instant the record button is depressed by the user.
  • the data volume associated with one or more video clips stored in the data memory 2030 can become considerable, such that it is desirable for the software application 2200 when executed upon the computing hardware 2020 to provide the user with an opportunity to review the video clips to decide which to retain and which to discard.
  • the software application 2200 is arranged to cause the computing hardware 2020 to function in a radically different manner in comparison to known contemporary video content manipulation software employed in lap-top computer or desk-top computers.
  • the touch-screen graphical user interface 2040 is relatively small in area, for example 4 cm x 7 cm in spatial extent. People with poorer eyesight, for example more mature users, are often not able to distinguish fine detail on the touch-screen graphical user interface 2040, despite it being technically feasible to provide the interface 2040 with a high pixel resolution by microfabrication processes.
  • the user interface 2040 is capable of supporting, in conjunction with the software application 2200, tapping and swiping motions of the user's fingers for instruction input purposes to the computing hardware 2020. Such tapping and swiping motions are clearly distinguished from click-and-drag motions customarily employed in conventional video editing software executable upon lap-top and desktop personal computers.
  • the software application 2200 executed upon the computing hardware 2020 provides an editing mode of operation which the user employs with the user interface 2040 facing towards the user.
  • an elongate dimension of the user interface 2040 is arranged to be top-bottom, and a transverse dimension of the user interface is arranged to be left-right as observed by the user.
  • the software application 2200 executing upon the computing hardware 2020 is operable to present a sequence of captured video clips as miniature icons 2310 along an axis 2320 from top to bottom.
  • the video clips shown by the icons 2310 are optionally presented in a temporal sequence in an order in which they were captured by the device 2010.
  • the user is able to scroll up and down the icons by way of a finger swiping motion applied to the user interface 2040.
  • the user is able to view a given video clip by tapping upon a corresponding icon 2310 displayed along the axis 2320.
  • the "best video clip bin” 2370 is spatially remote in the right-hand portion 2350 relative to the "trash bin” 2360 as illustrated in FIG. 17, with the one or more "moderate interest bins” 2380 interposed therebetween.
  • the user positions his/her finger or thumb over a given icon 2310 of a given video clip to be sorted and then swipes the icons 2310 into a bin desired by the user. For example, video clips that are not to be retained in the data memory 2030 are selected for deletion by the user swiping icons 2310 corresponding to the video clips towards the "trash bin" 2360.
  • video clips that are definitely to be retained in the data memory 2030 are selected for keeping by the user swiping icons 2310 corresponding to the video clips towards the "best video clip bin” 2370.
  • video clips that are to be retained, at least in a short term, in the data memory 2030 are selected for intermediate storage by the user swiping icons 2310 corresponding to the video clips towards the one or more "moderate interest bins" 2380.
  • the one or more "moderate interest bins" 2380 are optionally susceptible to being given names chosen by, and thus meaningful to, the user.
  • the user can invoke, for example by a finger or thumb tapping action, an "empty trash bin" icon on the user interface 2040 to delete the video clips sorted by the user into the "trash bin” 2360, to free space in the data memory 2030 for receiving future video clips.
  • the software application 2200 provides a secondary sorting function, wherein the user invokes a given primary bin from the right-hand portion 2350 by a finger or thumb tapping action. Such an action causes the user interface 2040 to switch to a secondary mode, wherein the video clips within the given primary bin appear along the axis 2320 in the left-hand portion 2300 of the user interface 2040.
  • one or more secondary bins 2400 are presented in the right-hand portion 2350, enabling the user, by way of a swiping action as aforementioned, to sort the contents of the given primary bin into one or more of the secondary bins 2400 presented in the right-hand portion 2350.
  • Tertiary and higher order sorting of video clips into tertiary and higher order bins via swiping actions executed by the given user on the user interface 2040 is optionally supported by the software application 2200 executing upon the computing hardware 2020 of the device 2010.
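The swipe-based sorting into a "trash bin", a "best video clip bin" and one or more "moderate interest bins", together with the secondary and tertiary sorting just described, can be modelled as nested bins. The class below is a hypothetical sketch of such a data structure; the names `Bin`, `add_clip`, `open_sub_bin` and `empty_trash` are illustrative and do not appear in the software application 2200.

```python
class Bin:
    """A sorting bin for video-clip icons 2310; a bin may itself contain
    sub-bins, modelling the primary/secondary/tertiary sorting above."""

    def __init__(self, name):
        self.name = name
        self.clips = []      # clip identifiers swiped into this bin
        self.sub_bins = {}   # revealed when the user taps the bin

    def add_clip(self, clip_id):
        """A swiping action moves a clip's icon into this bin."""
        self.clips.append(clip_id)

    def open_sub_bin(self, name):
        """Tapping a primary bin exposes (or creates) a secondary bin."""
        return self.sub_bins.setdefault(name, Bin(name))


def empty_trash(bins):
    """The 'empty trash bin' action: discard sorted-out clips to free
    space in the data memory 2030 for receiving future video clips."""
    bins["trash"].clips.clear()
```

In use, a dictionary of primary bins is created, clips are swiped (added) into them, and the trash bin is emptied on demand; deeper sorting levels fall out of the recursion for free, since every sub-bin is itself a `Bin`.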
  • the software application 2200 executing upon the computing hardware 2020 is operable to enable the user to upload one or more video clips sorted into one or more bins to be retained by communicating data corresponding to the one or more video clips via wireless through the wireless communication interface 2050 to one or more remote proxy servers 2500.
  • the software application 2200 enables the device 2010 to be used to manipulate the uploaded data clips on the one or more remote proxy servers 2500, for example to assemble the uploaded video clips into a composite video creation to which additional sound effects, additional sound tracks, additional video effects can be added, prior to the composite video creation being broadcast via social media, for example via YouTube, Facebook or similar.
  • either the rotation-corrected sub-images 2240 or data corresponding to the data stream 2210 giving rise to the rotation-corrected sub-images 2240, or both, are uploaded; the user is optionally provided with an option which to choose depending upon whether or not there is a need for the user to revert back to the data corresponding to the original data stream 2210.
  • uploading the rotation-corrected sub-images 2240 only requires less transfer of data and is hence faster and/or less demanding in available wireless data communication capacity.
  • the invention is capable of providing numerous benefits to the user.
  • the software application 2200 executing upon the computing hardware 2020 of the device 2010 is operable to capture data from one or more sensors in a more convenient manner, thereafter to provide the user with an environment in which to perform various processing operations on the captured data despite the device 2010 having a relatively smaller graphical user interface 2040, and then to communicate resulting processed data to one or more proxy servers 2500 via wireless or direct wire and/or optical fibre communication connection.
  • Such wire and/or optical fibre communication is beneficially achieved by way of the device 2010 communicating via near-field wireless communication to a communication node in close spatial proximity to the device 2010, where the communication node has a direct physical connection to a communication network, for example the Internet; such near-field wireless communication can, for example, be performed by the user, after capturing video clips, placing the device 2010 in close proximity with a lap-top computer or desk-top computer also equipped with near-field wireless communication, for example conforming to Bluetooth or similar communication standard.
  • Bluetooth is a registered trade mark.
  • the device 2010 is beneficially a contemporary wireless smart phone, for example a standard mass-produced item, which is adapted by executing the software application 2200 thereupon, to implement the present invention.
  • the software application 2200 is beneficially implemented as an "App” which can be downloaded from an "App store", namely external database, else provided preloaded into the smart phone at its initial purchase by the user.
  • the given user 1260 is provided with an instantaneous player experience, namely commencing with a short duration highlight, for example implemented as a GIF style clip having a duration in a range of 4 to 10 seconds, whereafter the player experience progresses to full high definition (HD).
  • such an experience is capable of being provided by sending a JPEG image wrapped as a Quicktime movie video file; "Quicktime" is a registered trademark.
  • the devices 100, 1110, 2010, for example implemented as one device, are employable to capture, namely shoot, a series of images, namely pictures, in a "burst mode".
  • the devices 100, 1110, 2010 optionally are operable to employ high resolution images as video key frames, and the devices 100, 1110, 2010 are then operable to add lower quality reference frames thereto, for example for building up a form of image movie sequence.
  • when the devices 100, 1110, 2010 are being used by their respective users 1260 to watch a video clip 1330, the following sequence of user actions is beneficially performed in the devices 100, 1110, 2010:
  • the given user 1260 taps upon a presentation screen of the devices 100, 1110, 2010, for example taps the display 2040;
  • the software 2200 executing upon the devices 100, 1110, 2010 is operable to employ an analysis algorithm to measure how fast the given user 1260 reacts, namely the software 2200 determines user-reaction-times, based upon one or more of: user interaction in respect of a speed editor, in respect of a sound graph on a portion of a video clip 1330, in respect of a scene change or other activity on a video clip 1330 being watched by the given user 1260;
  • the software 2200 executing upon the devices 100, 1110, 2010 is operable to interpret single taps applied by the given user 1260 as points of interest for creating a video edit when editing the given video clip 1330;
  • the software 2200 executing upon the devices 100, 1110, 2010 is optionally operable to suggest areas of interest in the video clip 1330 that the given user can jump to during editing.
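Combining the tap-to-mark behaviour with the measured user-reaction-times suggests shifting each tap back in time so the marked point of interest lands on the moment that prompted it. The sketch below assumes that interpretation; the function name and the default reaction time are illustrative and are not taken from the software 2200.

```python
def edit_points(tap_times, reaction_time=0.5):
    """Interpret single taps applied during playback as points of
    interest for a video edit, compensating for how fast the given
    user reacts by shifting each tap back by the measured
    user-reaction-time (clamped at the start of the clip)."""
    return [max(0.0, t - reaction_time) for t in tap_times]
```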
  • the software 2200 is optionally operable to provide:
  • When implementing the colour grading mechanism, it is desirable that information that is used by the devices 100, 1110, 2010 to grade images, namely pictures, according to their colour is beneficially stored in a single encrypted uncompressed image. Moreover, when implementing such a colour grading mechanism, when a given user 1260 selects a colour grade to apply to his/her images, the software 2200 causes the devices 100, 1110, 2010 to read a stripe of the images for determining information required by the software to apply a transformation to the images; in other words, a stripe-based colour assessment of images is employed in the system 1100 in conjunction with the devices 100, 1110, 2010.
  • a given image is used as a basis for the software 2200, for example in conjunction with the system 1100, to create a colour grade stripe. Thereafter, the given image is scanned to extract information therefrom indicative of all colours present in the given image. Next, the colours present are divided evenly to indicate a full range of different colours and shades present in the image. Thereafter, from this information, a stripe is created by the software 2200 for use in colourizing other video clips 1330.
  • Such a mode is beneficial, for example, when editing or creating other video clips 1330, for example for maintaining a colour scheme throughout a duration of one or more video clips 1330.
  • such a mode is beneficial when creating vintage video clips 1330, wherein a theme of vintage greyscale images is to be utilized throughout the video clips 1330.
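The stripe-creation steps just described (scan the image, collect all colours present, divide them evenly) can be sketched as follows. This is a minimal illustration assuming pixels are supplied as (R, G, B) tuples; the name `make_colour_stripe` and the default stripe length are hypothetical, not part of the software 2200.

```python
def make_colour_stripe(pixels, stripe_len=16):
    """Create a colour-grading stripe from a given image: scan the
    pixels to find every colour present, then sample the distinct
    colours evenly so the stripe represents the full range of colours
    and shades in the image."""
    seen = sorted(set(pixels))   # all distinct colours, in a stable order
    if len(seen) <= stripe_len:
        return seen
    step = len(seen) / stripe_len
    return [seen[int(i * step)] for i in range(stripe_len)]
```

Applying a grade then only requires reading this short stripe rather than re-analysing the reference image, which matches the stripe-based colour assessment described above.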
  • the colour stripe editing tool is beneficially provided by the software 2200 and/or the system 1100, wherein the colour stripe tool can be used to create new colour grading stripes for use when processing images.
  • a video clip 1330 can be added, whereafter the system 1100 executes a short analysis and thereafter makes recommendations to the given user 1260 regarding which types of colour grades would be most suitable to a given video clip 1330.
  • the system 1100 beneficially analyzes the uploaded long video for determining at least one of: audio variation in the long video, colour variation in the long video, brightness variation in the long video, scene change timing in the long video, movement information present in the long video (for example, by vector analysis in a compressed H.264 video format domain), facial information in the long video, geographic metadata present in the long video, timing metadata in the long video.
  • algorithms executing in the system 1100 and/or on the devices 100, 1110, 2010 are operable to split the long video, for example at I-frames present in the long video, to create segments of interesting video, namely to subdivide the long video into a plurality of smaller video clips 1330 that the given user 1260 is able to manipulate and incorporate into their video compilations.
  • the segments of interesting video are recompressed and made to conform to a standardized mezzanine format within the system 1100.
  • bytes are extracted and copied from the metadata without compression, thereby enabling searching for segments of video to be executed within the system 1100 based upon metadata searching.
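Splitting the long video at I-frame boundaries, so that every resulting clip is a self-contained unit beginning with an intra frame, can be sketched as below. The frame-type codes and the minimum-segment-length parameter are assumptions for illustration, not the actual segmentation algorithm of the system 1100.

```python
def split_at_iframes(frames, min_len=2):
    """Subdivide a long video into smaller self-contained segments,
    cutting only at I-frames so each segment begins with an intra
    frame; `frames` is a sequence of frame-type codes 'I', 'P', 'B'."""
    segments, current = [], []
    for frame in frames:
        if frame == "I" and len(current) >= min_len:
            segments.append(current)   # close the previous segment
            current = []
        current.append(frame)
    if current:
        segments.append(current)
    return segments
```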
  • a method of creating video templates for use when automatically editing a given user's 1260 video clips 1330 in a style of their favourite movie, short film, music video or similar. In order to implement such a method, information regarding the structure and features of the favourite movie, short film, music video or similar is determined, and then the information is applied to the given user's 1260 video clips, namely in a manner as will now be described.
  • the system 1100 includes a video indexing system that is operable to analyze a video, for example the given user's 1260 favourite video, and to create a list of scene changes and video cuts based on:
  • the video indexing system is operable to scan locally available videos of the system 1100 and perform thereon, in respect of time, a domain-based perceptual hashing and analysis of the video as aforementioned.
  • the video indexing system is operable to play a video back from the Internet (World Wide Web: www) through a standard contemporary web browser interface and then analyze the video in real time while identifying associations with the one or more external databases.
  • Such a video indexing system functionality for the system 1100 provides the given user 1260 with rapid access to video content and audio content which, for example, can be appropriately compiled together with the given user's 1260 video clips 1330 to generate a composite video composition, for example as aforementioned in respect of video clips 1330A, 1330B to generate the video clip 1330C, as illustrated in FIG. 10.
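One simple stand-in for the domain-based perceptual hashing and scene-change listing described above is to compare perceptual hashes of consecutive frames and record a cut wherever the Hamming distance exceeds a threshold. The sketch below assumes integer-valued frame hashes and an illustrative threshold; it is not the indexing algorithm of the system 1100 itself.

```python
def scene_changes(frame_hashes, threshold=10):
    """Return the indices of frames at which scene changes (cuts)
    occur, by measuring the Hamming distance between perceptual
    hashes of consecutive frames."""
    cuts = []
    for i in range(1, len(frame_hashes)):
        distance = bin(frame_hashes[i] ^ frame_hashes[i - 1]).count("1")
        if distance > threshold:
            cuts.append(i)
    return cuts
```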
  • the video indexing system employs an application program interface (API) which allows third parties to build a video scanning system for their own websites and/or to execute on their local computing system to assist to build a list of video cut styles for popular movies and TV programmes; such information is then beneficially employed by the given user 1260 when manipulating his/her own video clips to generate longer duration video content in a style of well known video films and such like.
  • the system 1100 is thus beneficially capable of providing the given user 1260 with a method of using a film database, for example as established by the aforesaid video indexing system, to recreate a video cut style by implementing following steps:
  • step (c): if a video in step (b) is not found, proceeding to search the Internet to find a matching music video and thereafter scheduling an analysis pass as aforementioned;
  • a TV programme video template style is employed by the given user 1260; an associated method includes steps of:
  • the system 1100 employs a method including:
  • the system 1100 beneficially guides the given user 1260 in one or more ways as follows:
  • the given user 1260 is instructed, via a simple mechanism, to sort the videos according to cuts that are needed, namely in a manual or semi-manual manner.
  • the system 1100 provides the given user 1260 with a simple storyboarding tool that presents to the given user 1260 what types of shots, for example video clips 1330, are needed to recreate a film.
  • a method of selecting video content, editing video content, sorting videos and asynchronously updating associated metadata accessible to the system 1100 over low-bandwidth communication connections is thus operable to provide an editing platform, for example as aforementioned.
  • the system 1100 is operable to allow the given user 1260 to add metadata to a local or remote video via an associated still picture image.
  • video files are large, for example several hundred Mbytes or even several Gbytes in size, and are thus difficult to sort via low bandwidth communication networks, for example linking the device 100, 1110, 2010 to its associated server arrangement of the system 1100 as aforementioned.
  • the example embodiment provides a simple method for the given user 1260 to sort, edit, order, request, output and add metadata to video files using only still picture frames, wherein such still picture frames are easier to handle via low-bandwidth communication networks.
  • single frames of video content are outputted at specific times from a beginning of a video that contains metadata; such metadata includes, for example, name, type, format, date, hash sum, UUID, location, geo-location and so forth.
  • first frames are beneficially outputted and saved, and thereafter video images are extracted at predefined intervals, for example predefined regular intervals; there is thereby generated an image file including still images.
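The sampling scheme described above, namely a first frame, frames at predefined regular intervals and a last frame, may be illustrated by the following minimal Python sketch; the function name is hypothetical, and actual frame extraction would be performed by a video processing tool rather than shown here:

```python
def thumbnail_times(duration_s, interval_s):
    """Return timestamps (seconds) at which still frames are extracted:
    the first frame, frames at regular intervals, and the last frame."""
    if duration_s <= 0 or interval_s <= 0:
        raise ValueError("duration and interval must be positive")
    times = [0.0]                # first frame of the video
    t = interval_s
    while t < duration_s:
        times.append(t)          # frames at predefined regular intervals
        t += interval_s
    times.append(duration_s)     # last frame of the video
    return times
```

For a 60-second video sampled at 10-second intervals, seven timestamps are produced, the first corresponding to the first frame and the last to the last frame.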
  • the method then progresses to use the image file for bi-directional metadata transport, wherein metadata is beneficially embedded in respect of the still images using software tools such as EXIF, XMP and IPTC; "EXIF", "XMP" and "IPTC" are trademarks.
  • there are three forms of metadata that thereby co-exist embedded into the data file including the still images, in one or more suitable formats.
  • the three forms of metadata optionally include:
  • Persistent metadata: such persistent metadata optionally includes technical metadata concerning where and when the video was shot, format of the video, resolution of the video, UUID of the video, and so forth;
  • User/owner metadata: such user/owner metadata optionally includes any metadata that has been added to the video to enhance its discovery when searches are implemented and when categorization activities are undertaken within the system 1100.
  • the user/owner metadata includes metadata indicative of how the video, for example a video clip thereof, has been used and by whom;
  • Social/collaboration metadata: such social/collaboration metadata optionally includes data added to video files by the given user 1260, and/or other users, in a form of comments or feedback that relate to subject matter of the video.
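The three co-existing metadata forms may, purely by way of illustration, be combined into a single serialisable payload suitable for embedding into a still image's EXIF/XMP/IPTC fields; the following Python sketch uses hypothetical names and omits the actual embedding step, which would employ an image metadata library:

```python
import json

def build_frame_metadata(persistent, user_owner, social):
    """Combine the three co-existing metadata forms into one payload
    that could be embedded into a still image's metadata fields."""
    payload = {
        "persistent": persistent,    # technical: where/when shot, format, UUID, ...
        "user_owner": user_owner,    # discovery and categorization metadata
        "social": social,            # comments and feedback from users
    }
    return json.dumps(payload, sort_keys=True)
```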
  • Type 1, for automatically creating a server-side request for a segment of a video, wherein the segment is defined by two points in time in a timeline of the video: a series of image files, for example thumbnail images, that have been created from a single piece of video are requested via a simple web or mobile web browser interface. The given user 1260 sorts the series of image files in a simple image organisation software application such as Google Picasa; "Google" and "Picasa" are trademarks.
  • a server side request is optionally made for a stream of video to be downloaded to be created by e-mailing, for example, two thumbnail images to a secret user-specific e-mail address; for example, for a minute-long-duration video, seven thumbnails, for example at 10 second intervals, are created, wherein the first thumbnail is a first frame of the video and a last thumbnail is a last image of the video.
  • a server supporting the system 1100 will return a segment of video having a duration of 30 seconds with a start point corresponding to a time 00:00:20.000 and an end point corresponding to a time 00:00:50.000.
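The mapping from two e-mailed thumbnails back to a server-side segment request may be sketched as follows, assuming 0-based thumbnail indices and a fixed sampling interval; the function name is illustrative only:

```python
def segment_from_thumbnails(i, j, interval_s=10):
    """Map two thumbnail indices (0-based, at a fixed sampling interval)
    back to start/end timecodes for a server-side segment request."""
    start, end = sorted((i, j))

    def tc(t):
        # Format seconds as HH:MM:SS.mmm, matching the timecodes above
        return f"{t // 3600:02d}:{(t % 3600) // 60:02d}:{t % 60:02d}.000"

    return tc(start * interval_s), tc(end * interval_s)
```

For example, e-mailing the third and sixth thumbnails of a video sampled at 10-second intervals yields the timecodes 00:00:20.000 and 00:00:50.000.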
  • Type 2, for social feedback related to one specific frame of a video: as part of a campaign, for example, thumbnails, namely a single frame of a video, are automatically posted to one or more social media web-sites.
  • a tracking system would know locations of image files and track corresponding user feedback, for example likes and comments. Data corresponding to such user feedback is beneficially appended to a source image file for later use, for example as complementary metadata to assist searching operations executed within the system 1100.
  • Type 3, for sorting and ordering via use of thumbnail images: a given user 1260 employs a simple image management tool to sort and order thumbnail images, for example using one or more of the devices 100, 1110, 2010.
  • Selections made by the given user 1260 are then uploaded to a FrameBlast backend of the system 1100 which, in return, creates a simple edit project that can be opened on-line at a later time.
  • Another example embodiment of the invention concerns the system 1100 employing a method of storing metadata needed to recreate an edited video from any device, for example the devices 100, 1110, 2010, using metadata relating to source material used, wherein the metadata is embedded invisibly into media files such as still images, video and even audio files.
  • the method is also optionally able to embed an EDL as a sonic fingerprint that is part of an audio track.
  • the given user 1260 edits source videos to combine them together to generate a composite video, wherein the source videos are optionally recorded via use of a mobile device, streamed from one or more external servers, and/or are downloaded from one or more external servers; in FIG. 10 combining two video clips 1330A, 1330B to generate a composite video clip 1330C is shown as an example.
  • a list of source videos used to create the composite video, namely output, is commonly known as an Edit Decision List, or "EDL" as aforementioned.
  • the EDL is used by a video rendering engine of the system 1100 to create the aforesaid composite video.
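An EDL of the kind described may be modelled minimally as an ordered list of source references with in/out points; the Python sketch below is illustrative only, and a real rendering engine would additionally resolve each UUID to a media file:

```python
from dataclasses import dataclass

@dataclass
class EdlEntry:
    source_uuid: str   # UUID embedded in the source video container header
    in_point: float    # seconds into the source clip where the cut begins
    out_point: float   # seconds into the source clip where the cut ends

def edl_duration(edl):
    """Total duration of the composite video described by an EDL."""
    return sum(e.out_point - e.in_point for e in edl)
```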
  • the method includes:
  • employing a mobile video editing system, for example a proprietary iOS-based and Android-based system hosted by the system 1100, to take a selection of videos, to select portions of these videos, to order multiple videos into a seamless playlist, namely an EDL, to preview the multiple videos as if they were one seamless file, and to render therefrom a new video file;
  • remixing within the system 1100, without the given user 1260 needing to know any specific project file, but rather only requiring access to embedded metadata.
  • when the system 1100 is given a composite video file, it thus first scans a video container for the custom "atom" of the composite video file, reads therefrom the playlist, namely the EDL, and then searches locally within the system 1100, or remotely in respect of the system 1100, for corresponding source video and metadata files, and thereafter opens these in a video editor for remixing purposes.
  • the source video files are provided with corresponding UUIDs which are embedded into headers of video containers of the source video files to enable them to be found, even if video file names of the source video files have, for example, been changed by the given user 1260.
  • a hashing system, for example a perceptive- or data-based hashing system, is employed when implementing the method for quickly narrowing down searches for videos stored locally and/or remotely in respect of the system 1100.
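A perceptive hashing scheme for narrowing searches might, for example, resemble the following average-hash sketch operating on small grayscale frames; this is an assumed, simplified stand-in for whichever hashing system an implementation employs, and the names are hypothetical:

```python
def average_hash(pixels):
    """Perceptual (average) hash of a small grayscale frame: each bit
    records whether a pixel is above the frame's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def narrow_candidates(query_hash, library, max_distance=5):
    """Keep only library entries (uuid -> hash) whose hash is close
    to the query, quickly narrowing a subsequent exact search."""
    return [uid for uid, h in library.items()
            if hamming(query_hash, h) <= max_distance]
```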
  • each EDL contains a local source filename for a given video together with a shortened URL identifier that allows the video to be tracked and found at multiple locations on the Internet (www) using a DNS-type dynamic tracking system; such a feature is useful for detecting unauthorised use of video in violation, for example, of copyright and/or licensing arrangements, for generating revenues for proprietors of the video.
  • the system is optionally operable to check for still images associated with the video and reads corresponding EXIF, XMP or IPTC metadata for the presence of the EDL; if such detection fails, the system is optionally operable to check audio tracks of the video for high frequency audio content which has the EDL encoded into it.
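The detection cascade, namely the custom container "atom" first, then still-image metadata, then a high-frequency audio watermark, can be expressed as a simple precedence function; the probe results below are hypothetical placeholders supplied by a caller rather than real container, EXIF or audio parsers:

```python
def recover_edl(container_atom=None, still_image_metadata=None, audio_watermark=None):
    """Return the first EDL found, trying carriers in order of preference:
    custom container atom, then EXIF/XMP/IPTC metadata of associated
    still images, finally a high-frequency audio watermark."""
    for candidate in (container_atom, still_image_metadata, audio_watermark):
        if candidate:
            return candidate
    return None  # no EDL recoverable from any carrier
```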
  • the system is operable to segment the composite video file into its component video parts and load them for re-editing; however, transforms applied when generating the composite video often modify or strip data originally present in the component video parts.
  • video content is beneficially temporally auto-stretched to fit specific edits desired by the given user 1260.
  • the system 1100 beneficially hosts a fraud protection algorithm that enables content usage tracking to be achieved, which is beneficially also employed synergistically to predict what kind of content is preferred by the given user 1260;
  • the fraud protection algorithm is operable to infer what the given user 1260 may like by correlating to what other users of the system 1100 like that have downloaded similar videos, for example wherein similarity is determined, at least in part, by metadata correlation;
  • the fraud protection system is operable to employ tags, channels, algorithm types and effects usage as input for use in tracking dissemination of video files;
  • the fraud protection algorithm is operable to employ Eigenvector analysis to find principal routes by which a given video file is disseminated and used, namely by analyzing for principal links.
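Eigenvector analysis of dissemination may be approximated by power iteration on a share/link adjacency matrix, in the manner of PageRank-style scoring: the dominant eigenvector assigns each node a score reflecting how central it is to the principal routes along which a video spreads. The sketch below assumes a small dense matrix and is illustrative only:

```python
def principal_dissemination_routes(adjacency, iterations=100):
    """Power iteration for the dominant eigenvector of the transposed
    adjacency matrix: adjacency[j][i] is nonzero when node j passes the
    video to node i; scores[i] accumulates from nodes linking to i."""
    n = len(adjacency)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        nxt = [sum(adjacency[j][i] * scores[j] for j in range(n))
               for i in range(n)]
        norm = sum(nxt) or 1.0   # renormalise so the scores sum to 1
        scores = [x / norm for x in nxt]
    return scores
```

Nodes with the highest scores lie on the principal dissemination routes; in practice the matrix would be sparse and far larger.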
  • the system 1100 is optionally also operable to provide real-time rotoscoping.
  • Such rotoscoping involves server-side conversion of videos for use in application green screening.
  • such rotoscoping optionally involves mixing music band video with local video, for example in a manner as aforementioned, generating auto green screens.
  • such rotoscoping optionally involves manual rotoscoping of multiple clips with a music background which is common thereto; optionally, such manual rotoscoping is outsourced in respect of operation of the system 1100.
  • the system 1100 is optionally also operable to provide an audio re-synchronizing functionality, for example as described in the foregoing, for example for audio "re- syncing" of multiple clips with background music which is common thereto.
  • audio "re-syncing" is beneficial when the given user 1260 desires to create composite performances, for example in a manner akin to vintage sound-on-sound overlay achieved using magnetic tape audio recorders ("reel-to-reel tape recorders").
  • the system 1100 is optionally operable to employ at least one proxy server to record all user inputs and outputs, and thus video and/or audio file manipulation and subsequent dissemination from the system 1100, for example to other users.

Abstract

A video clip editing system (100, 160, 200) employs a mobile telephone (100) including computing hardware (110) coupled to data memory (120), to a touchscreen graphical user interface (130), and to a wireless communication interface (140), wherein the computing hardware (110) is operable to execute one or more software applications (200) stored in the data memory (120). The one or more software applications (200) are operable when executed on the computing hardware (110) to provide an editing environment on the touch-screen graphical user interface (130) for editing video clips (410, 510) by user swiping-type instructions entered at the touch-screen graphical user interface (130) for generating a composite video creation, wherein a timeline (400) for icons (410) representative of video clips (410) is presented as a scrollable line feature on the touch-screen graphical user interface (130). Icons (510) of one or more video clips (510) for inclusion into the timeline (400) are presented adjacent to the timeline (400) on the touch-screen graphical user interface (130), such that video clips corresponding to the icons (510) are incorporated onto the timeline (400) by the user employing swiping-type instructions entered at the touch-screen graphical user interface (130) for generating the composite video creation.

Description

SYSTEM FOR VIDEO CLIPS
Technical Field
The present disclosure relates to systems for video clips, for example to systems for capturing, editing and distributing video clips; the present disclosure also concerns methods of operating aforesaid systems. The present invention also relates to media distribution systems, for example to media distribution systems which are operable to distribute short video clips; the present invention also concerns methods of distributing media, for example to methods of distributing media including short video clips. Moreover, the present invention relates to camera apparatus implemented using mobile wireless devices, for example using smart wireless telephones, equipped with at least one optical imaging sensor; the invention is also concerned with methods of implementing a camera apparatus using a mobile wireless device, for example a smart wireless telephone, equipped with at least one optical imaging sensor. Furthermore, the present invention relates to software products recorded on non-transient machine-readable data storage media, wherein the software products are executable upon computing hardware for implementing aforesaid methods, for example at least in part via computing hardware on a mobile wireless device.
Background
Software products for editing video clips and still pictures to generate video creations, for example for uploading to popular media sites such as YouTube, Facebook and similar ("YouTube" and "Facebook" are registered trade marks), are well known and are executable, as illustrated in FIG. 1, upon a lap-top computer and/or a desk-top computer 10, namely a personal computer (PC), with a graphical display 12 of considerable screen area, for example of 19 inch (circa 50 cm) diagonal screen size, and appreciable data memory capacity, for example 4 Gbytes of data memory capacity, for storing video clips and still pictures. Moreover, the computer 10 includes a high-precision pointing device 14, for example a mouse-type pointing device or a tracker ball-type pointing device. By employing such a high-precision pointing device 14, a given user is able to manipulate icons 16 corresponding to video clips and/or still pictures presented to the given user along a horizontal timeline 18, to control a sequence in which the video clips and/or still pictures are presented when replayed as part of a composite video creation. The given user is also provided with various options presented on the graphical display 12 for adding visual effects, as well as overlaying sound tracks, for example proprietary commercial sound tracks and/or user sound tracks which the given user has stored in the data memory of the computer 10. The high-precision pointing device 14 and the graphical display 12 of considerable screen area provide a convenient environment in which the given user is capable of making fine adjustments when editing the composite video creation to a completed state for release, for example, via aforementioned popular media sites.
Mobile wireless communication devices, for example mobile telephones, referred to as "cell phones" in the USA, first came into widespread use during the 1980's. These earlier wireless communication devices provided relatively simple user interfaces including a keyboard for dialling, and a simple display to provide visual confirmation of dialled numbers as well as simple messages, for example short messaging system (SMS) information. Since the 1980's, mobile wireless communication devices have evolved to become more physically compact, and to be equipped with more processing power and larger data memory. Contemporary mobile communication devices are distinguished from personal computers (PCs) by being of a relatively smaller physical size which will fit conveniently into a jacket pocket or small handbag, for example in an order of 10 cm long, 4 cm broad and 0.5 cm to 1 cm thick.
In comparison to early mobile wireless communication devices, for example mobile telephones which first became popular in the 1980's, contemporary mobile wireless communication devices, for example "smart phones", have become computationally so powerful that diverse software applications, known as "Apps", can be downloaded via wireless communication to the contemporary devices for execution thereupon. Conveniently, the Apps are stored on an external database, for example known as an "App store". Users of contemporary wireless communication devices are, for example, able to download various Apps from the App store in return for paying a fee. When executed upon computing hardware of the contemporary wireless communication devices, the Apps are capable of communicating data back and forth between the mobile wireless communication devices and other such devices and/or external databases.
A problem encountered with known contemporary mobile communication devices, for example smart telephones, is that their graphical user interfaces (GUI) are typically implemented by way of touch-screens of relatively small area which potentially have high pixel resolution but poor pointer-control resolution by way of user finger contact or pointing pen contact onto the touch-screens. As a consequence, it is found extremely difficult for users, especially when their eyesight is impaired and/or their finger dexterity is lacking, for example users of mature age, to download contemporary software applications onto their smart telephones and use the software applications in a manner described in the foregoing for generating composite video compositions. In consequence, users are able to use their smart telephones to capture video clips and/or still pictures, but must then subsequently use a laptop computer and/or desktop computer to edit the captured video clips and/or still pictures to generate composite video creations. Such a process is laborious, frustrating and time consuming for the users.
During the past thirty years, personal computers (PC) have evolved to an extent that they can be coupled up to communication networks, for example to the Internet, which has enabled content, for example video content, to be shared by users throughout such communication networks. In parallel with this evolution of personal computers, mobile telephones have evolved to be equipped with cameras, together with more computing power and data memory, such that these mobile telephones can be used by a given user to share still pictures and video content with other mobile telephone users. Many contemporary mobile telephones are Internet- enabled, such that they can be employed to surf the Internet.
Internet platforms such as Facebook and other social networking sites have enabled given users 20 which are amateur video producers to upload their video content for viewing by other users 22; "Facebook" is a registered trade mark. Referring to FIG.2, operators 30 of such platforms 40 have generated income 50 by including advertisements 60 on their platforms 40 which are presented to the other users 22 when they search and view video content of their preference. However, such platforms 40 are not adapted to reward financially the given users 20 who expend effort to generate video content and hence rely on the voluntary efforts of such given users 20 to generate video content. Moreover, many users of aforesaid platforms 40 find advertisements obtrusive and deleterious to enjoyment of video content.
In comparison to early mobile wireless communication devices, for example mobile telephones which first became popular in the 1980's, contemporary mobile wireless communication devices, for example "smart phones" from companies such as Nokia, Apple Corp. and Samsung, have become computationally so powerful that diverse software applications, known as "Apps", can be downloaded via wireless communication to the contemporary devices for execution thereupon. Conveniently, the Apps are stored on an external database, for example known as an "App store". Users of contemporary wireless communication devices are, for example, able to download various Apps from the App store in return for paying a fee. When executed upon computing hardware of the contemporary wireless communication devices, the Apps are capable of communicating data back and forth between the mobile wireless communication devices and other such devices and/or external databases.
In addition to being provided with greater computational power and more data memory capacity, contemporary mobile telephones have also tended to include various inbuilt sensors, for example at least one miniature camera, an accelerometer, a GPS receiver, a temperature sensor, a touch screen, in addition to a microphone and a loudspeaker required for oral telephonic activities. Example implementations of contemporary smart phones are described in published patent applications as provided in Table 1.
Table 1: Known contemporary mobile wireless communication devices

Patent application no.   Title
WO2012/088939A1          "Mobile phone and camera method thereof" (Huizhou TCL Mobile Communication Co. Ltd.)
WO2011/082332A1          "Methods and arrangements employing sensor-equipped phones" (Digimarc Corp.)

A problem encountered with known contemporary mobile communication devices is that they are not optimally configured for capturing video content, for example in a manner which is convenient to communicate via wireless communication networks offering modest communication bandwidth and to store in limited data memory capacity of the devices. It is known that capturing video content is susceptible to generate large video data files. Although methods of data compression for video content are known, these methods do not properly address a manner in which the video content is generated. In the aforesaid published international PCT patent application no. WO2011/082332A1, there are described improvements to smart phones and related sensor-equipped systems. There are elucidated improvements, for example a user can assist a smart phone in identifying what portion of imagery captured by a smart phone camera should be processed, or identifying what type of image processing should be executed.
In the aforesaid published international PCT patent application no. WO2012/088939A1, there is described a mobile phone and a processing method for use in the mobile phone. The mobile phone includes:
(i) a directional detection module for determining whether or not a shooting direction of the mobile phone is vertical; and
(ii) an image processing module for receiving a direction-indicative signal from the directional detection module.
The image processing module rotates an image acquired by a camera of the mobile telephone when the shooting direction is vertical. By application of the aforesaid method, the image is rotated directly inside the mobile phone, thereby avoiding a need for the user to upload the image into a computer and then to rotate the image by 90° manually.

Summary
The present invention seeks to provide a video clip editing system which is more convenient for users to employ, wherein the system is based upon users employing their wireless communication devices, for example their smart telephones including touch-screen graphical user interfaces, for controlling editing of video clips and/or still pictures to generate corresponding composite creations, namely composite video compositions. Moreover, the present invention seeks to provide more convenient methods of operating a video clip editing system, wherein the methods are based upon users employing their wireless communication devices, for example their smart telephones including touch-screen graphical user interfaces, for controlling editing of video clips and/or still pictures to generate corresponding composite video creations, namely composite video compositions.
Furthermore, the present invention seeks to provide a software application which is executable upon computing hardware of a contemporary smart mobile telephone for adapting the smart mobile telephone technically to function in a manner which is more convenient when editing video content to generate corresponding composite video creations.
The present invention also seeks to provide a media distribution system, also known as a media distribution platform, which is operable to reward more effectively given users who generate video content.
Moreover, the present invention seeks to provide a method of distributing media in a form of video content, which rewards more effectively given users who generate video content.
The present invention also seeks to provide a camera apparatus, for example implemented using a contemporary mobile telephone, which provides for more convenient capture of video content in a form which is readily susceptible to being communicated by wireless.
Moreover, the present invention seeks to provide a method of capturing video content which is more convenient for users, for example when using a contemporary mobile telephone. Furthermore, the present invention seeks to provide a software application which is executable upon computing hardware of a contemporary mobile telephone for adapting the mobile telephone technically to function in a manner which is more convenient for capturing video content.
According to a first aspect of the present invention, there is provided a video clip editing system as defined in appended claim 1: there is provided a video clip editing system employing a mobile telephone including computing hardware coupled to data memory, to a touch-screen graphical user interface, and to a wireless communication interface, wherein the computing hardware is operable to execute one or more software applications stored in the data memory, characterized in that the one or more software applications are operable when executed on the computing hardware to provide an editing environment on the touch-screen graphical user interface for editing video clips by user swiping-type instructions entered at the touch-screen graphical user interface to generate a composite video creation, wherein a timeline for icons representative of video clips is presented as a scrollable line feature on the touch-screen graphical user interface, and icons of one or more video clips for inclusion into the timeline are presented adjacent to the timeline on the touch-screen graphical user interface, such that video clips corresponding to the icons are incorporated onto the timeline by the user employing swiping-type instructions entered at the touch-screen graphical user interface for generating the composite video creation.
The invention is of advantage in that executing one or more software applications on computing hardware creates an environment enabling swiping-motion inclusion of one or more video clips onto a timeline for generating a composite video creation.
Optionally, for the video clip editing system, the mobile telephone is operable to be coupled in communication with one or more external databases via the wireless communication interface, and manipulation of video clips represented by the icons is executed, at least in part, by proxy control directed by the user from the touch-screen graphical user interface. Optionally, for the video clip editing system, the one or more software applications when executed upon the computing hardware enable one or more sound tracks to be added to one or more video clips, wherein a duration adjustment of the one or more sound tracks and/or the one or more video clips is executed automatically by the one or more software applications. More optionally, for the video clip editing system, the one or more sound tracks are adjusted in duration without causing a corresponding shift of pitch of tones present in the sound tracks. More optionally, for the video clip editing system, the one or more software applications executing upon the computing hardware are operable to cause the one or more video clips to be adjusted in duration by adding and/or subtracting one or more image frames from the one or more video clips. Yet more optionally, for the video clip editing system, the one or more software applications executing upon the computing hardware synthesize a new header or start frame of a video clip when a beginning part of the video clip is subtracted during editing.
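Adjusting a video clip's duration by adding and/or subtracting image frames, as described above, may be sketched as an even resampling of the clip's frame sequence; the function below is a hypothetical illustration rather than the claimed implementation, and real frame synthesis would operate on decoded video rather than a Python list:

```python
def adjust_clip_duration(frames, target_count):
    """Fit a clip to a target frame count by evenly duplicating frames
    (to lengthen) or dropping frames (to shorten), as when matching a
    clip's duration to a sound track."""
    if target_count <= 0:
        return []
    n = len(frames)
    return [frames[min(int(i * n / target_count), n - 1)]
            for i in range(target_count)]
```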
Optionally, for the video clip editing system, the one or more software applications executing upon the computing hardware are operable to provide a selection of one or more video clips for inclusion into the timeline presented adjacent to the timeline on the touch-screen graphical user interface, wherein the selection is based upon at least one of:
(a) temporally mutually substantially similar temporal capture times of the video clips;
(b) mutually similar subject matter content determined by analysis of the video clips or of corresponding metadata; and
(c) mutually similar geographic location at which the video clips were captured.
According to a second aspect of the invention, there is provided a method of editing video clips by employing a mobile telephone including computing hardware coupled to data memory, to a touch-screen graphical user interface, and to a wireless communication interface, wherein the computing hardware is operable to execute one or more software applications stored in the data memory, characterized in that the method includes:
(a) executing the one or more software applications on the computing hardware for providing an editing environment on the touch-screen graphical user interface for editing video clips by user swiping-type instructions entered at the touch-screen graphical user interface to generate a composite video creation;
(b) generating a timeline for icons representative of video clips as a scrollable line feature on the touch-screen graphical user interface;
(c) generating icons of one or more video clips for inclusion into the timeline adjacent to the timeline on the touch-screen graphical user interface; and
(d) incorporating video clips corresponding to the icons onto the timeline by the user employing swiping-type instructions entered at the touch-screen graphical user interface for generating the composite video creation.
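Steps (b) to (d) may be modelled, purely by way of illustration, as a timeline structure into which video clips are incorporated or removed in response to swiping-type instructions; the class and method names below are hypothetical and stand in for the touch-screen gesture handling itself:

```python
class Timeline:
    """Scrollable timeline of clip identifiers; a swipe from the adjacent
    selection strip incorporates the corresponding clip at a position."""

    def __init__(self):
        self.clips = []

    def swipe_in(self, clip_id, position=None):
        """Incorporate a clip onto the timeline (step (d))."""
        if position is None:
            self.clips.append(clip_id)            # swipe onto the end
        else:
            self.clips.insert(position, clip_id)  # swipe into a gap

    def swipe_out(self, position):
        """Remove a clip from the timeline by swiping it away."""
        return self.clips.pop(position)
```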
Optionally, the method further includes operating the mobile telephone to be coupled in communication with one or more external databases via the wireless communication interface, and manipulating video clips represented by the icons, at least in part, by proxy control directed by the user from the touch-screen graphical user interface.
Optionally, the method includes enabling, by way of the one or more software applications executing upon the computing hardware, one or more sound tracks to be added to one or more video clips, wherein a duration adjustment of the one or more sound tracks and/or the one or more video clips is executed automatically by the one or more software applications. More optionally, the method includes adjusting a duration of the one or more sound tracks without causing a corresponding shift of pitch of tones present in the sound tracks. More optionally, the method includes executing the one or more software applications upon the computing hardware to cause the one or more video clips to be adjusted in duration by adding and/or subtracting one or more image frames from the one or more video clips. More optionally, the method includes executing the one or more software applications upon the computing hardware to synthesize a new header or start frame of a video clip when a beginning part of the video clip is subtracted during editing.
Optionally, the method includes executing the one or more software applications upon the computing hardware to provide a selection of one or more video clips for inclusion into the timeline presented adjacent to the timeline on the touch-screen graphical user interface, wherein the selection is based upon at least one of: (a) temporally mutually substantially similar temporal capture time of the video clips;
(b) mutually similar subject matter content determined by analysis of the video clips or of corresponding metadata; and
(c) mutually similar geographic location at which the video clips were captured.
According to a third aspect of the invention, there is provided a software application stored in machine-readable data storage media, characterized in that the software application is executable upon computing hardware for implementing a method pursuant to the second aspect of the invention.
Optionally, the software application is downloadable as a software application from an external database to a mobile telephone for implementing the method.

According to a fourth aspect of the present invention, there is provided a media distribution system including one or more databases coupled via a communication network to users, wherein the system provides for a subset of the users to upload video content for distribution via the system to other users, characterized in that:
(a) the system includes a reviewing arrangement for receiving the uploaded video content provided to the system and for generating corresponding recommendations which determine an extent to which the video content is disseminated through the system to other users; and
(b) the system includes a reward arrangement for rewarding users who have uploaded the video content to the system as a function of the recommendations and an extent of distribution of the video content.
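A reward arrangement of the kind described in (b) could, in one minimal illustrative form, scale a payout by both the reviewers' recommendation and the extent of distribution. The function name, the normalisation of the recommendation score to a 0..1 range, and the weights below are all assumptions rather than anything specified by the system:

```python
def compute_reward(recommendation_score, distribution_count,
                   base_rate=0.01, rec_weight=1.0):
    """Reward an uploader as a function of (i) reviewer recommendations
    (assumed normalised to 0..1) and (ii) how widely the content was
    distributed; base_rate and rec_weight are illustrative constants."""
    return base_rate * distribution_count * (1.0 + rec_weight * recommendation_score)
```

A real system would presumably cap payouts, aggregate over time windows, and weight reviewers differently; the multiplicative form simply makes reward grow with both inputs, as the aspect requires.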
The invention is of advantage in that the system is more rewarding for given users of the system to employ when distributing their video content. Optionally, the media distribution system is implemented such that the reviewing arrangement includes users who belong to at least one special interest group accommodated by the system, and the system includes an arrangement for directing the uploaded video content to the at least one special interest group based on subject matter included in the uploaded video content. Optionally, the media distribution system is implemented such that the system is operable to generate advertisement content for presenting to a given user, wherein the advertisement content comprises an advertiser's video content combined with video content provided by the given user, or by at least one special interest group to which the given user belongs, wherein the advertiser's video content includes at least one video template into which the video content provided by the given user, or by at least one special interest group to which the given user belongs, is inserted, thereby personalizing the advertisement content to the given user.
Optionally, the media distribution system is implemented such that the system includes an arrangement for monitoring a dissemination of the video content to users and aggregating distribution results for generating distribution analyses. More optionally, the media distribution system is implemented such that the arrangement for monitoring the dissemination is operable to monitor dissemination of composite music clips and video clips and to provide analysis data indicative of association of the music clips with video clips elected by users of the system.
According to a fifth aspect of the invention, there is provided a method of operating a media distribution system including one or more databases coupled via a communication network to users, wherein the system provides for a subset of the users to upload video content for distribution via the system to other users, characterized in that the method includes:
(a) using a reviewing arrangement of the system for receiving the uploaded video content provided to the system and for generating corresponding recommendations which determine an extent to which the video content is disseminated through the system to other users; and
(b) using a reward arrangement of the system for rewarding users who have uploaded the video content to the system as a function of the recommendations and an extent of distribution of the video content.
Optionally, the method includes arranging for the reviewing arrangement to include users who belong to at least one special interest group accommodated by the system, and arranging for the system to include an arrangement for directing the uploaded video content to the at least one special interest group based on subject matter included in the uploaded video content.
Optionally, the method includes arranging for the system to generate advertisement content for presenting to a given user, wherein the advertisement content comprises an advertiser's video content combined with video content provided by the given user, or by at least one special interest group to which the given user belongs, wherein the advertiser's video content includes at least one video template into which the video content provided by the given user, or by at least one special interest group to which the given user belongs, is inserted, thereby personalizing the advertisement content to the given user.
Optionally, the method includes arranging for the system to include an arrangement for monitoring a dissemination of the video content to users and aggregating distribution results for generating distribution analyses. More optionally, the arrangement for monitoring the dissemination is operable to monitor dissemination of composite music clips and video clips and to provide analysis data indicative of association of the music clips with video clips elected by users of the system. According to a sixth aspect of the invention, there is provided a software product recorded on machine-readable data storage media, characterized in that the software product is executable upon computing hardware for executing a method pursuant to the fifth aspect of the invention. According to a seventh aspect of the present invention, there is provided a camera apparatus including a wireless communication device incorporating computing hardware coupled to a data memory, to a wireless communication interface for communicating data from and to the wireless communication device, to a graphical user interface for receiving user input, and to an optical imaging sensor for receiving captured image data therefrom, wherein the computing hardware is operable to execute one or more software applications for enabling the optical imaging sensor to capture one or more images, and for storing corresponding image data in the data memory and/or for communicating the corresponding image data from the wireless communication device via its wireless communication interface, wherein the wireless communication device has an elongate external enclosure having a longest dimension (L) defining a direction of a corresponding elongate axis for the wireless communication device, characterized in that
(a) the one or more software applications are operable to enable the wireless communication device to capture images when the wireless communication device is operated by its user such that the elongate axis is orientated in substantially an upward direction, wherein the one or more software applications are operable to cause the computing hardware to select sub-portions of captured images provided from the optical imaging sensor and to generate corresponding rotated versions of the selected sub-portions to generate image data for storing in the data memory and/or for communicating via the wireless communication interface; and
(b) the one or more software applications are operable to enable the wireless communication device to capture the one or more images as one or more video clips in response to the user providing tactile input at an active region of the graphical user interface, wherein each video clip is of short duration (D) and is a self-contained temporal sequence of images.
The invention is of advantage in that the camera apparatus is more convenient to employ on account of its substantially vertical operating orientation and its manner of operation to generate self-contained video clips of convenient duration (D) for subsequent processing.
By "substantially vertical", it is meant that the elongate axis is within 45° of vertical direction, more preferably is within 30° of vertical direction, and most preferably is within 20° of vertical direction.
Optionally, for the camera apparatus, the short duration (D) is in a range of 1 second to 20 seconds, more preferably in a range of 1 second to 10 seconds, and most preferably substantially 3 seconds.
Optionally, for the camera apparatus, the wireless communication device includes a sensor arrangement for sensing an angular orientation of the elongate axis of the wireless communication device and generating a corresponding angle indicative signal, and the one or more software applications are operable to cause the computing hardware to receive the angle indicative signal and to rotate the sub-portions of the captured images so that they appear when viewed to be upright and stable images.
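The counter-rotation driven by the angle-indicative signal can be sketched as follows. Snapping the sensed angle to 90-degree steps, the row-major grid representation, and the function names are simplifying assumptions; a real implementation would rotate pixel data at arbitrary angles with interpolation.

```python
def snap_to_upright(angle_deg):
    """Quantize a sensed device angle to the nearest multiple of 90 degrees,
    i.e. the amount by which the selected sub-portion must be counter-rotated."""
    return (round(angle_deg / 90.0) * 90) % 360

def rotate_cw(pixels):
    """One 90-degree clockwise rotation of a row-major pixel grid."""
    return [list(row) for row in zip(*pixels[::-1])]

def stabilise_crop(pixels, sensed_angle_deg):
    """Rotate a cropped sub-portion so it appears upright regardless of
    how the elongate device is held (per the angle-indicative signal)."""
    for _ in range((snap_to_upright(sensed_angle_deg) // 90) % 4):
        pixels = rotate_cw(pixels)
    return pixels
```

For example, a device tilted by a couple of degrees leaves the crop untouched, while a device held at roughly 90 degrees yields a crop rotated back to upright.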
Optionally, for the camera apparatus, the one or more software applications are operable when executed upon the computing hardware to present one or more icons representative of video clips upon the graphical user interface, and one or more icons representative of sorting bins into which the one or more icons representative of video clips are susceptible to being sorted, wherein sorting of the one or more icons representative of video clips into the one or more icons representative of sorting bins is invoked by a user swiping motion executed by a thumb or finger of the user on the graphical user interface, wherein a given icon representative of a corresponding video clip is defined at a beginning of the swiping motion and a destination sorting bin for the selected icon representative of a corresponding video clip is defined at an end of the swiping motion.
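The swipe-to-sort gesture can be modelled as two hit-tests, one at each end of the swipe: the start point selects the clip icon, the end point selects the destination bin. The rectangle representation and the function names below are illustrative assumptions:

```python
def hit_test(icons, point):
    """Return the id of the first icon whose rectangle (x, y, w, h)
    contains the touch point, else None."""
    px, py = point
    for icon_id, (x, y, w, h) in icons.items():
        if x <= px < x + w and y <= py < y + h:
            return icon_id
    return None

def resolve_swipe(start, end, clip_icons, bin_icons):
    """Map a swipe gesture to a (clip, bin) sorting action: the clip is
    defined at the beginning of the swipe, the bin at its end."""
    clip = hit_test(clip_icons, start)
    destination = hit_test(bin_icons, end)
    if clip is None or destination is None:
        return None
    return (clip, destination)
```

A swipe that starts or ends outside any icon simply resolves to no action, which matches how such gestures are usually discarded.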
Optionally, for the camera apparatus, the one or more software applications executing upon the computing hardware are operable to cause the one or more icons representative of video clips upon the graphical user interface to be sorted to be presented in a scrollable array along a longest length dimension of the graphical user interface.
More optionally, for the camera apparatus, the one or more software applications executing upon the computing hardware are operable to cause the one or more icons representative of video clips upon the graphical user interface to be sorted to be presented in a spatial arrangement indicative of a time at which the video clips were captured by the optical imaging sensor. More optionally, for the camera apparatus, at least one of the one or more icons representative of sorting bins, into which the one or more icons representative of video clips are susceptible to being sorted, is a trash bin, wherein the computing hardware is operable to present the user with a graphical representation option for emptying the trash bin to cause data stored in the data memory corresponding to contents of the trash bin to be deleted for freeing data memory capacity of the data memory.
Optionally, for the camera apparatus, the one or more software applications are operable when executed upon the computing hardware to enable the wireless communication device to upload one or more video clips from the data memory to one or more remote proxy servers and to manipulate the one or more video clips uploaded to the one or more proxy servers via user instructions entered via the graphical user interface.
According to an eighth aspect of the invention, there is provided a method of implementing a camera apparatus using a wireless communication device incorporating computing hardware coupled to a data memory, to a wireless communication interface for communicating data from and to the wireless communication device, to a graphical user interface for receiving user input, and to an optical imaging sensor for receiving captured image data therefrom, wherein the computing hardware is operable to execute one or more software applications for enabling the optical imaging sensor to capture one or more images, and for storing corresponding image data in the data memory and/or for communicating the corresponding image data from the wireless communication device via its wireless communication interface, wherein the wireless communication device has an elongate external enclosure having a longest dimension (L) defining a direction of a corresponding elongate axis for the wireless communication device, characterized in that the method includes:
(a) employing the one or more software applications to enable the wireless communication device to capture images when the wireless communication device is operated by its user such that the elongate axis is orientated in substantially an upward direction, wherein the one or more software applications are employed to cause the computing hardware to select sub-portions of captured images provided from the optical imaging sensor and to generate corresponding rotated versions of the selected sub-portions to generate image data for storing in the data memory and/or for communicating via the wireless communication interface; and (b) employing the one or more software applications to enable the wireless communication device to capture the one or more images as one or more video clips in response to the user providing tactile input at an active region of the graphical user interface, wherein each video clip is of short duration (D) and is a self-contained temporal sequence of images.
By "substantially vertical", it is meant that the elongate axis is within 45° of vertical direction, more preferably is within 30° of vertical direction, and most preferably is within 20° of vertical direction.
Optionally, for the method, the short duration (D) is in a range of 1 second to 20 seconds, more preferably in a range of 1 second to 10 seconds, and most preferably substantially 3 seconds. Other durations are optionally possible for the short duration (D).
Optionally, the method includes using a sensor arrangement of the wireless communication device for sensing an angular orientation of the elongate axis of the wireless communication device and generating a corresponding angle indicative signal, and employing the one or more software applications to cause the computing hardware to receive the angle indicative signal and to rotate the sub-portions of the captured images so that they appear when viewed to be upright and stable images.
Optionally, the method includes employing the one or more software applications when executed upon the computing hardware to present one or more icons representative of video clips upon the graphical user interface, and one or more icons representative of sorting bins into which the one or more icons representative of video clips are susceptible to being sorted, wherein sorting of the one or more icons representative of video clips into the one or more icons representative of sorting bins is invoked by a user swiping motion executed by a thumb or finger of the user on the graphical user interface, wherein a given icon representative of a corresponding video clip is defined at a beginning of the swiping motion and a destination sorting bin for the selected icon representative of a corresponding video clip is defined at an end of the swiping motion. More optionally, the method includes employing the one or more software applications executing upon the computing hardware to cause the one or more icons representative of video clips upon the graphical user interface to be sorted to be presented in a scrollable array along a longest length dimension of the graphical user interface.
More optionally, the method includes employing the one or more software applications executing upon the computing hardware to cause the one or more icons representative of video clips upon the graphical user interface to be sorted to be presented in a spatial arrangement indicative of a time at which the video clips were captured by the optical imaging sensor.
More optionally, the method includes employing the one or more software applications to cause at least one of the one or more icons representative of sorting bins, into which the one or more icons representative of video clips are susceptible to being sorted, to be a trash bin, wherein the computing hardware is operable to present the user with a graphical representation option for emptying the trash bin to cause data stored in the data memory corresponding to contents of the trash bin to be deleted for freeing data memory capacity of the data memory.
Optionally, the method includes employing the one or more software applications when executed upon the computing hardware to enable the wireless communication device to upload one or more video clips from the data memory to one or more remote proxy servers and to manipulate the one or more video clips uploaded to the one or more proxy servers via user instructions entered via the graphical user interface.
According to a ninth aspect of the invention, there is provided a software product recorded on machine-readable data storage media, characterized in that the software product is executable upon computing hardware for implementing a method pursuant to the eighth aspect of the invention.
Optionally, the software product is downloadable from an App store to a wireless communication device including the computing hardware. It will be appreciated that features of the invention are susceptible to being combined in various combinations without departing from the scope of the invention as defined by the appended claims.
Description of the diagrams
Embodiments of the present invention will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1 is an illustration of a contemporary laptop or desktop computer arranged to execute software products for providing a user environment for editing video clips and/or still pictures to generate corresponding composite video creations;
FIG. 2 is an illustration of a conventional platform for distributing video content;
FIG. 3 is an illustration of a contemporary smart telephone which is operable to execute one or more software applications for implementing the present invention;
FIG. 4 is an illustration of an editing environment provided on the contemporary smart telephone of FIG. 3;
FIG. 5 is an illustration of timeline icons and transverse icons presented to a given user in the editing environment of FIG. 4;
FIG. 6 is an example of sound analysis employed in the smart telephone of FIG. 3;
FIG. 7 is an example of sound track editing performed without altering tonal pitch of the sound track;
FIG. 8A to FIG. 8D are illustrations of video editing which is implementable using the smart telephone of FIG. 3;
FIG. 9 is an illustration of a media distribution system pursuant to the present invention;
FIG. 10 is an illustration of advertisement content generation pursuant to the present invention;
FIG. 11 is an illustration of an application of the present invention in association with a social event, for example a sports event;
FIG. 12 is an illustration of a conventional contemporary mobile telephone employed to capture still images and video content;
FIG. 13 is an illustration of a contemporary mobile telephone and active elements included within the contemporary mobile telephone;
FIG. 14 is an illustration of a contemporary mobile telephone adapted to implement a camera apparatus pursuant to the present invention;
FIG. 15 is an illustration of image field manipulation adopted when implementing the present invention on the contemporary mobile telephone of FIG. 14;
FIG. 16 is an illustration of image stabilization performed on the contemporary mobile telephone of FIG. 14;
FIG. 17 is an illustration of video clip sorting implemented on the contemporary mobile telephone of FIG. 14; and
FIG. 18 is an illustration of video clip uploading from the contemporary mobile telephone of FIG. 14 to an external proxy database.
In the accompanying diagrams, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.

Description of embodiments of the invention
In overview, referring to FIG. 3, the present invention is concerned with a wireless communication device 100, for example a contemporary smart telephone, which includes computing hardware 110 coupled to a data memory 120, to a touch-screen graphical user interface 130, and to a wireless communication interface 140. The wireless communication device 100 is operable to communicate via a cellular wireless telephone network 150, for example to one or more external databases 160. Moreover, the computing hardware 110 and its associated data memory 120 are of sufficient computing power to execute software applications 200, namely "Apps", downloaded to the wireless communication device 100 from the one or more external databases 160, for example from an "App store" thereat.
The wireless communication device 100 includes an exterior casing 250 which is compact and generally elongate in form, namely having a physical length dimension L to its spatial extent which is longer than its other width and thickness physical dimensions W, T respectively; an elongate axis 260 defines the length dimension L as illustrated. Moreover, in such contemporary wireless communication devices, it is customary for the devices to have substantially planar front and rear major surfaces 270, 280 respectively, wherein the front major surface 270 includes the touch-screen graphical user interface 130 and a microphone 290, and wherein the rear major surface 280 includes an optical imaging sensor 300, often referred to as being a "camera". When employed by a given user, the wireless communication device 100 is most conveniently employed in an orientation in which the elongate axis 260 is observed from top-to-bottom by the given user, for example such that the microphone 290 is beneath the touch-screen graphical user interface 130 when viewed by the given user.
A software application 200 for implementing the present invention is pre-loaded into the data memory 120 of the wireless communication device 100 and/or is downloaded from the one or more external databases 160 onto the data memory 120 of the wireless communication device 100. The software application 200 is executable upon the computing hardware 110 to generate an environment for the given user to edit video clips and/or still pictures via the touch-screen graphical user interface 130, namely an environment which is convenient to employ by the given user, despite the limited size and pointing resolution of the graphical user interface 130, and which functions in a manner which is radically different to that provided by known contemporary video editing software as aforementioned for use in laptop and desktop computers. An example user environment presented on the touch-screen graphical user interface 130 by execution of the software application 200 upon the computing hardware 110 will now be described in greater detail. Referring to FIG. 4, there is shown the touch-screen graphical user interface 130 in an orientation as viewed by the given user when executing editing activities pursuant to the present invention; the elongate axis 260 is conveniently orientated from top-to-bottom. The software application 200 executing upon the computing hardware 110 presents a timeline 400 from top-to-bottom. This timeline 400 represents a temporal order in which video clips are assembled into a composite video creation. A series of icons 410 presented along the timeline 400 ranges from an icon I(1) to an icon I(n), where there are n icons 410 corresponding to video clips to be accommodated in the composite creation; optionally, n is so large that not all icons 410 from I(1) to I(n) can be shown simultaneously on the touch-screen graphical user interface 130, requiring a swipe-scrolling action by the given user to examine and manipulate them as will be described later.
Optionally, the integer n is initially user-defined; alternatively the given user can add as desired one or more additional icons 410 within the series of icons 410 as required, and given user can also subtract as desired one or more icons 410 from the series of icons 410 as required. By employing a directional finger or thumb swiping motion along the timeline 400 on the touch-screen graphical user interface 130, namely an upwardly-directed swipe or downwardly-directed swipe, the given user can move along the series of icons 410 on the touch-screen graphical user interface 130 to work on a given desired icon 410.
Referring next to FIG. 5, for a given icon 410 scrolled by the given user to align with a transverse axis 450, for example an icon I(i) where an integer i is in a range 1 to n, the software application 200 executing upon the computing hardware 110 is operable to cause a selection of video clips represented as icons 510 to appear which can be inserted by user-selection for inclusion to be represented by the icon I(i). The icons 510 are shown as a transverse series which is scrollable by way of the given user performing a transverse finger or thumb swiping motion along the transverse axis 450 on the touch-screen graphical user interface 130. The icons 510 when scrolled are overlaid onto the icon I(i) on the touch-screen graphical user interface 130; the given user can incorporate the video clip corresponding to the icon 510 overlaid onto the given icon I(i) by tapping the touch-screen graphical user interface 130 at the icon I(i), else depressing an "add" button area 520 provided along a side of the touch-screen graphical user interface 130. The given user progresses up and down the series of icons 410 until all desired video clips from the icons 510 are incorporated into the icons 410. Incorporation of user-selected icons 510 into the icons 410 as aforementioned causes corresponding movement or linking of video data corresponding to the icons 510. Such linking of video data can occur:
(a) directly in the wireless communication device 100, for example when all the video data corresponding to the icons 510 is present in the data memory 120; or (b) at the one or more external databases 160 by way of proxy control from the wireless communication device 100, when the video data corresponding to the video clips represented by the icons 510 is present at the one or more external databases 160.
When the video data corresponding to the icons 510 is present both within the data memory 120 and at the one or more external databases 160, manipulation of video data, for example uploading of video data from the wireless communication device 100 to the one or more external databases 160, is beneficially implemented when the given user has completed a session of editing along the timeline 400, for example by way of the given user depressing an "execute edit" button area 530 of the touch-screen graphical user interface 130, thereby reducing a need to communicate large volumes of data via the cellular wireless telephone network 150.
During manipulation of the icons 410, 510 as aforementioned, the given user can play corresponding video on the touch-screen graphical user interface 130 by tapping the icon 410, 510, alternatively by placing a desired icon to be played at an intersect of the timeline 400 and the axis 450 and then tapping the touch-screen graphical user interface 130 at the intersect, alternatively by depressing a "play" button area 540 of the touch-screen graphical user interface 130. When the video data corresponding to the selected icon 410, 510 resides in the data memory 120, the computing hardware 110 merely plays a low-resolution version of the selected video content to remind the given user of its content; alternatively, when the video data corresponding to the selected icon 410, 510 resides in the one or more external databases 160, a low-resolution version of the selected video content is optionally streamed to the wireless communication device 100 in real time from the one or more external databases 160.
The touch-screen graphical user interface 130 as seen in FIG. 4 and FIG. 5 may also optionally have an orientation in a horizontal mode wherein the wireless communication device 100 is rotated substantially 90 degrees clockwise or, more preferably, substantially 90 degrees counter-clockwise. This may be useful for certain operations that allow the user to see a wider screen of the video, and then to return to the vertical mode if and when required. Further, there is also an option of having a means of video preview in the user interface in both the shown vertical mode of FIG. 4 and FIG. 5, and the described horizontal mode of operation. The video preview is overlaid, or may be moved around, in the user interface 130 to assist in the editing or for viewing of the partially or fully completed video.
From the foregoing, it will be appreciated that the software application 200 is capable of providing a high degree of automatic coupling of video clips together to generate the composite video creation. It enables the given user not only to capture video clips using his/her wireless communication device 100, but also enables the given user to compose complex composite video creations from his/her wireless communication device 100; such functionality is inadequately catered for using contemporarily available software applications.
By using artificial intelligence, the icons 510 presented along the transverse axis 450 are chosen by execution of the software application 200 to be in graded relevance, for example one or more of:
(a) a next video clip, or preceding video clip, in temporal capture sequence to video clips preceding or following the icon I(i) along the timeline 400, thus enabling the given user to arrange with ease the video clips along the timeline 400 in a temporal sequence, or reverse temporal sequence, in which they were originally captured;
(b) a next video clip of similar type of video content to video clips preceding or following the icon I(i) along the timeline 400, thus enabling the given user to maintain a given theme in the video clips along the timeline 400 when composing the composite video creation, for example a given video clip is a picture of the given user's child eating French ice cream, and a next video clip along the timeline 400 presented as an option along the transverse axis 450 is a video clip of the Eiffel Tower in Paris, for example derived from a common database of video clips maintained at the one or more external databases 160;
(c) a next video clip proposed along the transverse axis 450 is captured from a generally similar geographical area as pertaining to video clips preceding or following the icon I(i) along the timeline 400, for example determined by the video clips having associated therewith metadata including GPS and/or GPRS position data which can be searched for relevance;
(d) one or more sound tracks proposed along the transverse axis 450, for example one or more music tracks, to be added to the video clip selected by the given user for the icon I(i); the one or more sound tracks can be those captured by the given user, alternatively for example derived from a common database of sound tracks maintained at the one or more external databases 160; and
(e) special effects to be added to the video content associated with the icon I(i), for example text bubbles, static exclamation symbols, animated exclamation symbols, geometric shapes to mask out certain portions of the video clip (for example for decency or anonymity reasons).
Combining video clips and additional sound tracks in respect of the icon I(i) is a non-trivial task, in view of the video clips and the sound tracks being of mutually different temporal duration. The touch-screen graphical user interface 130 does not provide the given user with sufficient adjustment precision to try to edit the sound track or video clip, and hence the software application 200, for example with assistance of proxy software applications executing at the one or more external databases 160, is required to add sound to video clips in an automated manner which provides a seamless and professional result. Such addition is beneficially achieved using one or more of the following techniques:
(i) F1: by fading the sound track in and out towards a beginning and an end of the video clip respectively;
(ii) F2: by cutting the music track on a music beat, for example switching to the subsequent video clip along the timeline 400 is achieved at the music beat; and
(iii) F3: by temporally stretching and/or shrinking one or more of the video clip and the music track so that they mutually temporally match.
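Technique F1 can be sketched as a linear gain ramp applied at both ends of the track. The mono-sample representation, fade length, and function name below are illustrative assumptions:

```python
def apply_fades(samples, sample_rate, fade_seconds=0.5):
    """Technique F1: linearly fade a mono sample sequence in at the start
    and out at the end so the sound track enters and exits smoothly."""
    out = list(samples)
    # never let the fade regions overlap on a very short track
    n = min(int(fade_seconds * sample_rate), len(out) // 2)
    for i in range(n):
        gain = i / n
        out[i] *= gain          # fade in
        out[-1 - i] *= gain     # fade out
    return out
```

Production code would typically use an equal-power (square-root or cosine) curve rather than a linear ramp, but the structure is the same.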
Options (ii) and (iii) require special data processing techniques which will now be elucidated in greater detail. In general, speeding up or slowing down a sound track, even by only a few percent, can alter radically users' aesthetic impression of the music track, as tonal pitches in the sound track are correspondingly shifted; in consequence, the present invention is susceptible to being implemented most simply by modifying the video clip itself, for example by insertion of duplicate video images into the video clip, or removal of video images from the video clip, or a combination of such insertion and removal of video images.
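Such duration adjustment by frame duplication or removal can be sketched minimally as follows; the nearest-frame mapping is one simple policy among several (frame blending or optical-flow interpolation being smoother alternatives), and the function name is an assumption:

```python
def retime_frames(frames, target_count):
    """Lengthen or shorten a clip by evenly duplicating or dropping
    image frames, so the clip can match a sound track's duration
    without shifting the sound track's pitch."""
    if not frames or target_count <= 0:
        return []
    source_count = len(frames)
    # map each output frame index back to the nearest source frame
    return [frames[min(source_count - 1, i * source_count // target_count)]
            for i in range(target_count)]
```

Doubling a four-frame clip duplicates every frame once; halving it keeps every second frame, which is exactly the insertion/removal behaviour described above.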
Beat analysis of a sound track will next be described with reference to FIG. 6. The software product 200, alternatively corresponding software executing upon the one or more external databases 160 and controlled by proxy from the wireless communication device 100, is operable to load a given sound track 600 to be analysed into data memory, for example into the data memory 120 or corresponding proxy memory at the one or more external databases 160. The sound track 600 is represented by a signal s(j) which has signal values s(1) to s(m) from its beginning to its end, wherein j and m are integers, and j represents temporal sample points in the signal s(j) and has a value in a range from 1 to m. The signal s(j) typically has many hundreds of thousands to many millions of sample points, depending upon the temporal duration of the signal s(j) from 1 to m. Optionally, the signal s(j) is a multichannel signal, for example a stereo signal. The signal s(j) is subjected to processing by the software application 200 executing upon the computing hardware 110, alternatively or additionally by corresponding software applications at the one or more external databases 160 under proxy control of the wireless communication device 100, to apply temporal bandpass filtering, denoted by 610, using digital recursive filters and/or a Fast Fourier Transform (FFT) to generate an instantaneous harmonic spectrum h(j, f) of the signal s(j) at each sample point j along the signal s(j), wherein h is an amplitude of a harmonic component and f is a frequency of the harmonic component, as illustrated in FIG. 6. Certain instruments defining beat, such as cymbals and bass drums, generate a particular harmonic signature which occurs temporally repetitively in the harmonic spectrum h as a function of the integer j.
For example, a period of the harmonic signature of the certain instruments defining beat can be determined by subjecting the harmonic spectrum h(j, f), for a limited frequency range f1 to f2 corresponding to the harmonic signature of such instruments, to further recursive filtering and/or Fast Fourier Transform (FFT), denoted by 620, as a function of the integer j to find a duration of the beat, namely the bar, from a peak in the spectrum generated by such analysis 620. When a duration of a bar in the music signal s(j) has been determined, the signal s(j) can then be cut by the software application 200 executing upon the computing hardware 110, alternatively by proxy at the one or more external databases 160, to provide automatically an edited sound track which is cut cleanly at a beat or bar in the original music track represented by the signal s(j). Such an analysis approach can also be used to loop back at least a portion of the sound track to extend its length, wherein loopback occurs precisely at a beat or bar-end in the music track.
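The two-stage analysis 610/620 can be sketched, purely illustratively, as follows. The first stage is here simplified to a short-time energy envelope (standing in for the bandpass-filtered harmonic spectrum), and the second stage locates the dominant periodicity of that envelope with an FFT; the function name, frame size and beat-rate band are assumptions, not part of the disclosure:

```python
# Simplified beat-period estimation in the spirit of FIG. 6 (all names assumed).
import numpy as np

def estimate_beat_period(s: np.ndarray, sample_rate: int) -> float:
    """Return an estimated beat period in seconds for the mono signal s(j)."""
    frame = sample_rate // 100                        # 10 ms analysis frames
    n = len(s) // frame
    # Stage 610 (simplified): short-time energy envelope of the signal.
    envelope = (s[:n * frame] ** 2).reshape(n, frame).mean(axis=1)
    envelope -= envelope.mean()                       # remove DC before the FFT
    # Stage 620: FFT of the envelope; the peak reveals the beat periodicity.
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(n, d=frame / sample_rate)
    band = (freqs >= 0.5) & (freqs <= 4.0)            # 30 to 240 beats per minute
    beat_freq = freqs[band][np.argmax(spectrum[band])]
    return 1.0 / beat_freq

# Synthetic "drum" track: a 0.1 s burst every 0.5 s (120 BPM) for 8 seconds.
sr = 8000
t = np.arange(8 * sr)
drums = ((t % (sr // 2)) < sr // 10).astype(float)
period = estimate_beat_period(drums, sr)
```

Once the beat period is known, cut points for the sound track are placed at integer multiples of that period, as described above.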
Optionally, the analysis 610 also enables the music track 600 to be analysed to determine whether it is beat music or slowly changing effects music, for example meditative organ music having long sustained tones, which is more amenable to fading pursuant to the aforesaid technique F1.
Changing a speed of the sound track without changing its tonal pitch will next be described with reference to FIG. 7. The software product 200, alternatively corresponding software executing upon the one or more external databases 160 and controlled by proxy from the wireless communication device 100, is operable to load a sound track 700 to be analysed into data memory, for example into the data memory 120 or corresponding proxy memory at the one or more external databases 160. The sound track 700 is represented by a signal s(j) which has signal values s(1) to s(m) from its beginning to its end, wherein j and m are integers, and j represents temporal sample points in the signal s(j) and has a value from 1 to m. The signal s(j) typically has many hundreds of thousands to many millions of sample points, depending upon the temporal duration of the signal s(j) from 1 to m. Optionally, the signal s(j) is a multichannel signal, for example a stereo signal. The signal s(j) is subjected to processing by the software application 200 executing upon the computing hardware 110, alternatively or additionally by corresponding software applications at the one or more external databases 160 under proxy control of the wireless communication device 100, to apply temporal bandpass filtering, denoted by 710, using digital recursive filters and/or a Fast Fourier Transform (FFT) to generate an instantaneous harmonic spectrum h(j, f) of the signal s(j) at each sample point j along the signal s(j), wherein h is an amplitude of a harmonic component and f is a frequency of the harmonic component, as illustrated in FIG. 6. By representing the harmonic spectra h(j, f) as a corresponding temporal data spectrum h'(d1.j, f), wherein d1 is a temporal period between samples when sampling the sound track 700, a slowed-down or speeded-up sound track is represented by h'(d2.j, f), wherein d1 and d2 are mutually different.
The duration d2 can be chosen so that the sound track h'(d2.j, f), when subjected to an inverse Fast Fourier Transform (i-FFT), denoted by 720, is of similar duration to a video clip, or series of video clips, to which the sound track h'(d2.j, f) is to be added. By such a technique, temporal durations of sound tracks and one or more video clips can be matched for purposes of being mutually added together using the software application 200 and/or corresponding proxy software at the one or more external databases 160. Such a technique enables a speed of the music track 700 to be changed for editing purposes without altering the pitch of tones present in the music track 700. Optionally, the software application 200 allows the given user to alter the tempo of the music track within a duration of the music track, for example to slow down the music track at a time corresponding to a particular event occurring in the video clip for artistic or dramatic effect, to make the composite video creation more exciting or interesting for subsequent viewers, for example when the composite video creation is shared over the aforesaid social media. Such slowing down or speeding up of the tempo of the music track without altering the frequency of tones in the music track is not a feature provided in contemporary video editing software, even for lap-top and desk-top personal computers.
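A greatly simplified sketch of such pitch-preserving time-stretching is given below. Instead of the full spectral re-sampling and i-FFT 720 described above, it uses a plain overlap-add of windowed grains whose output spacing differs from their input spacing; each grain keeps its original pitch while the overall duration changes. Function names and parameters are assumptions, and production-quality systems would use a phase vocoder or waveform-similarity alignment for artefact-free results:

```python
# Simplified overlap-add (OLA) time-stretch in the spirit of FIG. 7.
import numpy as np

def ola_stretch(s: np.ndarray, factor: float, grain: int = 1024) -> np.ndarray:
    """Stretch mono signal s by `factor` (>1 gives a longer, slower track)."""
    hop_in = grain // 2                     # analysis hop between grains
    hop_out = int(round(hop_in * factor))   # synthesis hop sets the new duration
    window = np.hanning(grain)              # taper each grain to avoid clicks
    n_grains = max(1, (len(s) - grain) // hop_in + 1)
    out = np.zeros(hop_out * (n_grains - 1) + grain)
    for g in range(n_grains):
        src = s[g * hop_in : g * hop_in + grain]
        out[g * hop_out : g * hop_out + grain] += window * src
    return out

tone = np.sin(2 * np.pi * 440 * np.arange(22050) / 22050)  # 0.5 s of 440 Hz
longer = ola_stretch(tone, 1.5)            # roughly 1.5x the original duration
```

The stretch factor would be chosen as the ratio of the target video-clip duration to the original sound-track duration.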
As an alternative, or in addition, to automatically editing features of sound tracks, the software application 200 is capable of processing video clips to extend or shorten their length for rendering them compatible in duration with sound tracks, for removing irrelevant or undesirable video subject matter, and similar. Referring to FIG. 8A to FIG. 8D, the software application 200, or corresponding software applications executing at the one or more external databases 160 under proxy control from the software application 200, when executed upon the computing hardware 110, is operable to enable a video clip 800 to be manipulated in data memory, for example in the data memory 120. The video clip 800 includes a header frame 810, for example an initial I-frame when in MPEG format, and a sequence thereafter of dependent frames, for example P-frames and/or B-frames when in MPEG format. When editing by shortening a beginning portion 820 of the video clip 800 as illustrated in FIG. 8A, a new header frame 830 is synthesized by the software application 200 or its proxy as aforementioned. When editing by extending a duration of the video clip 800, additional frames are added which cause the video clip 800 to replay more slowly, or momentarily pause, for example by adding one or more P-frames and/or B-frames 840 when in MPEG format; this is illustrated in FIG. 8B. Optionally, the added one or more P-frames and/or B-frames correspond to causing the video track 800 to loop back along at least a part of its sequence of images. When editing by shortening the duration of the video clip 800, for example as illustrated in FIG.
8C, one or more frames 860 are removed from the video clip 800 after its initial header 810, for example one or more B-frames or P-frames when in MPEG format, and the remaining abutting frames either side of where the one or more frames have been removed are then amended to cause as smooth a transition as possible between the abutting frames; any residual discontinuity is experienced, when the video is replayed, as a momentary visual jerking motion or sudden angular shift in a field of view of the video clip. As illustrated in FIG. 8D, the video clip 800 can also be extended, using the software application 200 and/or corresponding software applications executing at the one or more external databases 160 under proxy control from the software application 200, by inserting supplementary subject matter 900, for example experienced when viewing the video clip as a still image relevant to the subject matter of the video clip 800; for example, the video clip 800 is taken along a famous street in Stockholm, and then a picture of Gamla Stan in Stockholm is briefly shown for extending a duration of the video clip 800. Optionally, the software application 200 selects the inserted subject matter 900 from metadata associated with the video clip 800, and/or by analysing the video clip 800 to find related subject matter, for example by employing neural network analysis or similar. The subject matter 900 is inserted into the video clip 800 by dividing the video clip 800 into two parts 800A, 800B, each with its own start frame, for example each with its own I-frame when implemented in MPEG, and then inserting the subject matter 900 as illustrated between the two parts 800A, 800B.
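The extension and shortening of FIG. 8B and FIG. 8C can be sketched at the level of decoded frames, leaving aside the MPEG-specific synthesis of I-, P- and B-frames described above; function names are illustrative assumptions:

```python
# Hedged sketch of frame-level clip lengthening/shortening (FIG. 8B, FIG. 8C),
# operating on a list of decoded frames rather than on MPEG frame types.
def extend_clip(frames: list, target_len: int) -> list:
    """Lengthen a clip by evenly duplicating frames, slowing its replay."""
    if target_len <= len(frames):
        return frames[:target_len]
    return [frames[i * len(frames) // target_len] for i in range(target_len)]

def shorten_clip(frames: list, target_len: int) -> list:
    """Shorten a clip by evenly dropping frames, speeding its replay."""
    return [frames[i * len(frames) // target_len] for i in range(target_len)]

clip = list(range(75))              # e.g. 3 s of video at 25 frames per second
slower = extend_clip(clip, 100)     # now spans 4 s at the same frame rate
faster = shorten_clip(clip, 50)     # now spans 2 s
```

In an MPEG implementation the duplicated entries would instead become repeated P-frames and/or B-frames, and frame removal would require re-encoding the abutting frames as described above.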
The software application 200 is thus capable of executing automatic editing of video clips and/or sound tracks so that they match together in a professional manner, wherein such automation is necessary because the touch-screen graphical user interface 130 provides insufficient pointing manipulation accuracy and/or viewed visual resolution, especially when the given user has impaired eyesight, to enable precise manual editing operations to be performed. However, despite its sophisticated image and sound processing algorithms, the software application 200 and/or its proxy may not always achieve an aesthetically perfect edit. Beneficially, along the transverse axis 430, the software application 200 is operable to present the given user with a range of the aforementioned edits matching video clips and sound tracks together, for example generated using a random number generator to control aspects of the editing, for example where frames are added or removed, or where a music track is cut at an end of a selected music bar, at least in part depending upon a random number, so that the given user can select amongst the proposed edits implemented automatically by the software application 200 to select a best automatically generated edit. Optionally, the series of edits proposed by the software application 200 and/or its proxy is filtered for highlighting types of edits which the software application 200 recognizes to be to the taste of the given user, for example based upon an analysis of earlier choices made by the given user when selecting amongst automatically suggested edits of video clips and sound tracks, for example by way of neural network analysis of the given user's earlier choices. In other words, the software application 200 is capable of operating in a manner adaptive to the given user.
When the given user has completed generation of the composite video creation, stored in at least one of the data memory 120 and the one or more external databases 160, the given user is able to employ the software application 200 executing upon the computing hardware 110 to send the composite video creation to a web-site for distribution to other users, and/or to a data store of the given user for archival purposes. The web-site for distribution can be, for example, a social media web-site, or a commercial database from which the composite video creation is licensed or sold to other users in return for payment back to the given user. The present invention thereby enables the given user both to capture video clips and sound tracks using his/her wireless communication device 100, for example a smart telephone, and to use his/her wireless communication device 100 to edit the video clips and sound tracks to generate composite video creations for distribution, for example in return for payment. As a result, the present invention is pertinent, for example, to poorer parts of the World where the given user may be able to afford the wireless communication device 100, but cannot afford in addition a lap-top computer or desktop computer. By generating composite video creations using their smart telephones, such users from poorer parts of the World are able to become "film producers" and thereby vastly increase a choice of video content available around the World, to the benefit of humanity as a whole. In overview, the present invention is also concerned with a media distribution system which operates in a different manner to the contemporary Facebook and YouTube platforms as represented in FIG. 2. Referring to FIG.
9, a media distribution system 1100 is especially well adapted for handling short video clips, for example video clips of substantially 3 seconds duration generated from mobile wireless devices 1110, for example contemporary smart telephones, for example the aforementioned wireless communication device 100. Beneficially, the system 1100 includes, for example hosts, the aforesaid one or more databases 160. The devices 1110 each include computing hardware 1120 coupled to data memory 1130, to a wireless interface 1140, to a graphical user interface 1150, to a video camera 1160, and to a microphone 1170 and loudspeaker 1180. The media distribution system 1100 includes a communication network 1200 for receiving video content from the devices 1110 and also one or more databases 1210 for storing users' video content which has been uploaded from their devices 1110. However, the system 1100 is optionally operable to receive video content from other sources, for example from contemporary digital cameras whose video content is uploaded, for example via a personal computer (PC), through the Internet to the one or more databases 1210. The media distribution system 1100 operates in several ways which are radically different to the platform 40 of FIG. 2, namely:
(a) the system 1100 communicates the video content 1250 provided from one or more of the devices 1110 of one or more given users 1260 to a group of reviewing users 1270 who assess the communicated video content 1250 and make a recommendation 1280, for example a rating of the video content 1250, a YES/NO decision whether or not the video content should be used for a given purpose, and so forth. Optionally, the group of reviewing users 1270 is implemented, at least in part, automatically using computing machine-intelligence, for example by way of neural networks implemented in computing hardware and software, and/or by rule-based analysis, for example employing one or more stages of Fourier analysis of sound signals, and spatial Fourier analysis of images in video content;
(b) the system 1100 rewards the given users 1260 of the one or more devices 1110 which have provided the video content 1250 to the system 1100 with bonus points and/or financial payment, depending upon the recommendation 1280 from (a) and/or a distribution to other users 1290 that the video content 1250 receives based upon the recommendation 1280;
(c) the system 1100 establishes special interest groups of users who have mutually common interests, for example steam trains, classical pipe organs, stamp collecting, antiques, cycling, golf and so forth; each user is optionally a member of several different interest groups; optionally, the group of reviewing users 1270 in (a) are users in a special interest group reviewing video content whose subject matter pertains to the special interest group;
(d) the system 1100 is operable to provide the given users 1260 with video advertisements which at least one of: include, as an integral part thereof, one or more video clips generated by a given user 1260 to whom the advertisements are presented; and include, as an integral part thereof, one or more video clips generated by one or more users who are in a similar special interest group to a given user 1260 to whom the advertisements are presented. The system 1100 is thus operable to generate video advertisements as illustrated in FIG. 10, including a template video advertisement 1300 including an identification 1310 of a product or service to be sold and a template 1320, wherein the template video advertisement 1300 includes the template 1320 into which one or more user video clips 1330 can be inserted, wherein the template 1320 is a subfield of the template video advertisement 1300, and the template video advertisement 1300 and the one or more video clips are played substantially concurrently to the given users 1260; and
(e) the system 1100 establishes special interest groups of users who have mutually common interests, for example steam trains, classical pipe organs, stamp collecting, antiques, cycling, golf, music tracks, music songs and so forth as aforementioned; each user is optionally a member of several different interest groups; optionally, the special interest groups are determined automatically by performing an analysis of a manner in which the given users 1260 view, combine together or couple together one or more video clips 1330. For example, if a given user 1260 in a given interest group frequently views a substantially temporally constant set of video clips 1330, the system 1100 determines therefrom that there is an increased probability that the substantially temporally constant set of video clips 1330 all belong to a mutually similar special interest group; optionally, the system 1100 determines that a given video clip 1330 is likely to belong to a given special interest group from probabilities of association between the given clip and the special interest group derived from a plurality of users 1260. Moreover, for example, the system 1100 is optionally configured so that a given user 1260 is able to select a given first video clip 1330A and to add thereto a second video clip 1330B to generate a third, composite video clip 1330C; such combining optionally occurs in the device 1110, or alternatively in a proxy manner at the one or more databases 1210 via control from the device 1110 of the given user 1260. Combining the first and second video clips 1330A, 1330B together is logged by the system 1100 as indicating an increased probability that the first and second video clips 1330A, 1330B belong to video clips associated with a given special interest group represented by the given user 1260.
Such combining includes, for example, downloading a musical video clip 1330A generated by a first user 1260, for example a first musician playing a backing track for a piece of music, a second user 1260 recording in real time a second video clip 1330B whilst replaying the musical video clip 1330A, and then combining them to provide a third, composite video clip 1330C which is uploaded to the one or more databases 1210 for other users 1260 subsequently to download, or control via proxy, to add their video clips thereto. Such an approach associates the users 1260 into mutually similar special interest groups, as well as identifying that the video clips 1330A, 1330B, 1330C are all likely to belong to a group of video clips pertaining to the special interest group including the first and second users 1260. Moreover, such determination of special interest groups occurs in the system 1100 by way of user operations when managing video clips 1330, without any assessment being necessary from the reviewing users 1270. Optionally, the users 1260 pay a fee to the system 1100 for being permitted to use the first video clip 1330A by adding their second video clip 1330B to generate the combined video clip 1330C, for example a royalty payment to the first user 1260. Optionally, such payment takes into account an extent, for example a number of downloads and/or viewings, to which the composite video clip 1330C is accessed by other users 1260 of the system 1100.
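The probability-of-association logging described in (e) and in the foregoing can be sketched, purely illustratively, as pairwise co-occurrence counting: each time a user views or combines clips together, the count for every pair of clips involved is incremented, and the relative frequency of a pair is read as its association probability. All identifiers and the session structure below are hypothetical:

```python
# Illustrative co-occurrence counting for clip/interest-group association.
from collections import Counter
from itertools import combinations

pair_counts = Counter()
view_sessions = [
    ["clip_A", "clip_B", "clip_C"],   # one user views these clips together
    ["clip_A", "clip_B"],             # another user combines A and B
    ["clip_A", "clip_C"],
]
for session in view_sessions:
    # Count every unordered pair of clips occurring in the same session.
    for a, b in combinations(sorted(session), 2):
        pair_counts[(a, b)] += 1

total = sum(pair_counts.values())
prob_ab = pair_counts[("clip_A", "clip_B")] / total   # association probability
```

A threshold on such probabilities would then assign clips to a mutually similar special interest group without any assessment by the reviewing users 1270.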
The present invention is capable of bringing many benefits in comparison to the contemporary aforesaid known platforms such as Facebook and YouTube. Advertisers who elect to advertise via the system 1100 will have much greater impact with their advertisements, because including one or more video clips of the given user 1260 into advertisements presented to the given user 1260 will achieve more sympathy and attention from the given user 1260, and thus improved advertising impact. Similarly, video clips generated by other users in a similar special interest group to that of the given user 1260 are more likely to be of interest to the given user 1260, hence resulting in the advertisements being more sympathetically received by the given user 1260, and hence causing the given user 1260 to be more positively inclined towards the products and/or services presented to the given user 1260 in the advertisements.
The system 1100 is also capable of functioning as a "sales market" for video clips. This enables each user of a device 1110 to become their own film director and receive benefit, for example financial benefit, if their videos receive a good recommendation and are subsequently employed by other users of the system 1100. This enables poorer people in poorer parts of the World to have an alternative source of income, as well as educating other parts of the World, for example the rich First World, about conditions in poorer parts of the World by way of short video clips, namely an aim promoted by UNESCO and other international relief agencies. Moreover, news gathering agencies are able to offer payment for video clips provided via the system 1100 from various parts of the World from where they are desirous to gather news, for example from natural disaster areas and from war afflicted areas.
An example use of the system 1100 is in a social event as depicted in FIG. 11, for example a sports event. A sports event occurs in an arena 1400, for example on a football pitch, on a tennis court, in a boxing ring, or similar. The given users 1260 are present in a spectator or viewer area surrounding at least a part of a periphery of the arena 1400. During execution of the sports event, the given users 1260 are spectators and employ their mobile wireless devices 1110 to take short video clips of the event. The short video clips are communicated by wireless to a group of reviewing users 1270 who are responsible for the event and who rapidly review, substantially in real time, the short video clips to determine which would be most suitable to display in substantially real time on a large display screen 1410 mounted adjacently to the arena 1400. The given users 1260 whose short video clips are presented on the large display screen 1410, for example a projection display screen, are rewarded by the group of reviewing users 1270 providing the given users 1260 with a form of bonus, for example free tickets to other sports events or financial payment. Such an arrangement is capable of increasing interest in sports events attended by the given users 1260 and also provides the reviewing users 1270 with a collection of short video clips which can be employed for generating video content for communication internationally to other parts of the World for generating sports televising revenues. Although a sports event is described in the foregoing, the system 1100 is susceptible to being employed at music concert events, ceremonial events and so forth.
Optionally, the group of reviewing users 1270 is implemented, at least in part, automatically using computing machine-intelligence, for example by way of neural networks implemented in computing hardware and software, and/or by rule-based analysis, for example employing one or more stages of Fourier analysis of sound signals, and spatial Fourier analysis of images in video content. A classifying function performed by the reviewing users 1270 is optionally performed, as described in the foregoing, by way of patterns of user downloading of video clips 1330, viewing of video clips 1330, and a manner in which video clips 1330 are combined to generate corresponding composite video clips 1330.
The system 1100 is also capable of extracting aggregate data from the flow of short video clips communicated via the system 1100, for example data pertaining to specialist interest groups. This aggregated data, for example trend analysis data, maintains the confidentiality of individual given users 1260, but provides useful insight into dissemination of information. For example, when users of the system 1100 are able to combine sound clips and video clips together to generate composite video content for dissemination, for example as aforementioned, where copyright owners of the sound clips have permitted such use of their sound clips, the system 1100 is capable, by monitoring dissemination of the composite video content, of determining the following:
(i) a popularity of a given sound clip, for example a song or instrumental music piece; and
(ii) a video context in which users consider the given sound clip to be most suitable.
Such information is potentially useful to popular music bands that are desirous to release their latest music productions in a manner which will generate most sales, and will also assist in increasing an awareness and profile of the popular music bands. The system 1100 is able to offer such an investigative aggregation service to popular music bands and similar in return for payment.
It will be appreciated that the system 1100 is capable of providing novel features which render it more beneficial to users in comparison to contemporary known content distribution platforms 40 such as YouTube and similar.
Although reference is made to "short video clips", for example video clips having a playing duration in a range of 1 to 10 seconds, more optionally substantially 3 seconds, it will be appreciated that the present invention is mutatis mutandis also relevant for the distribution of still pictures.
Referring to FIG. 12 and FIG. 13, the present invention will next be described in association with a wireless communication device indicated by 2010; the wireless communication device 2010 is optionally implemented as the aforementioned device 1110 or the aforementioned device 100. The device 2010 is, for example, a compact contemporary smart phone. Moreover, the device 2010 includes computing hardware 2020 coupled to a data memory 2030, a touch-screen graphical user interface 2040, a wireless communication interface 2050, an optical pixel array sensor 2060, and a microphone 2070 and associated speaker 2080 for user oral communication. The wireless communication device 2010 is operable to communicate via a cellular wireless telephone network denoted by 2085. Moreover, the computing hardware 2020 and its associated data memory 2030 are of sufficient computing power to execute software applications, namely "Apps", downloaded to the wireless communication device from an external database 2090, for example from an "App store".
The wireless communication device 2010 includes an exterior casing 2100 which is compact and generally elongate in form, namely having a physical dimension to its spatial extent which is longer along an elongate axis 2110 than its other physical dimensions; for example, the exterior casing 2100 has a length L along the elongate axis 2110 which is greater than its width W, and also greater than its thickness T. Moreover, in such contemporary wireless communication devices, it is customary for the device 2010 to have substantially mutually parallel front and rear major planar surfaces 2120, 2130 respectively, wherein the touch-screen graphical user interface 2040 is implemented in respect of the front major planar surface 2120 and the optical pixel array sensor 2060 is implemented in respect of the rear major planar surface 2130. Such an implementation enables the device 2010 to be employed for oral dialogue, namely conversations, when the users are in a standing state and the elongate axis 2110 is orientated in a substantially vertical manner. However, it is contemporary design practice for the device 2010 to be rotated by 90° when the device 2010 is to be employed in its camera mode; in such a camera mode, a user holds the device 2010 at its elongate ends with both hands, such that the elongate axis 2110 is substantially horizontal, and the optical pixel array sensor 2060 is orientated typically away from the user towards a scene of interest, whilst the touch-screen graphical user interface 2040 presents to the user in real time a view as observed from the optical pixel array sensor 2060. The user then depresses a region of the touch-screen graphical user interface 2040 to capture an image as observed from the optical pixel array sensor 2060 and stores it as corresponding still image data in the data memory 2030, for example in JPEG or a similar contemporary coding format.
Similarly, the user is alternatively able to depress a region of the touch-screen graphical user interface 2040 to capture a sequence of video images as observed from the optical pixel array sensor 2060 and store it as corresponding video content data in the data memory 2030, for example in MPEG or a similar contemporary coding format; the user depresses a region of the touch-screen graphical user interface 2040 to terminate capture of the sequence of video images. Subsequently, the user can elect to communicate via wireless the still image data and/or video content data to other users, or onto an external database, for example a "cloud" database residing in the Internet, for archival purposes or for further processing. Such further processing is performed, for example, using video editing software executable upon laptop or desktop personal computers (PCs) which are connectable to the Internet, for example to download data from the "cloud" database. It is found by the inventors of the present invention that such a manner of operation of the device 2010 is awkward and results in unprofessional video content generation. Moreover, the video content data can be potentially unwieldy in size, which renders it costly and time consuming to communicate from the device 2010 onto one or more external databases. Moreover, subsequent editing of the video content data can be time consuming, requiring a PC to be activated into operation, and video content editing software to be invoked. Although awkward to implement, many contemporary users are prepared to employ such a laborious process for generating video content for subsequent distribution through well-known video sharing platforms such as YouTube, Facebook, and similar social media sharing Internet sites; "YouTube" and "Facebook" are registered trade marks.
The inventors of the present invention have appreciated that a mobile wireless communication device, for example the aforementioned device 2010, can be adapted by executing a suitable software application upon its computing hardware 2020 to operate in a manner which is technically more user-convenient and generates video content data which is more manageable to edit using the device 2010 and more efficient in its use of wireless communication bandwidth when communicated to an external database.
An embodiment of the present invention will now be described as applied to the aforesaid device 2010, with reference to FIG. 14. When implementing the present invention, at least one novel software application 2200 is downloaded from the external database 2090, or alternatively is already preloaded onto the device 2010. The novel software application 2200 enables the device 2010 to be employed as a camera apparatus for capturing still images and sequences of video images when the device 2010 is orientated with its elongate axis 2110 in a substantially vertical orientation; in other words, the software application 2200, when executed upon the computing hardware 2020 coupled to the data memory 2030, enables the computing hardware 2020 to perform the following operations:
(a) to receive a data stream 2210 from the optical pixel array sensor 2060 corresponding to a temporal sequence of n images 2220, denoted by p(1) to p(n), wherein n is an integer;
(b) to select a sub-portion 2230 of each image 2220 in the sequence of images 2220, as illustrated in FIG. 15, to generate a corresponding sequence of sub-images 2240; and
(c) to process the sequence of sub-images 2240 to rotate their orientation as illustrated in FIG. 15 by a compensation angle which is a function f of the angle Θ, for example an angle of substantially 90°, to generate a sequence of corresponding rotation-corrected sub-images 2240.
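Steps (a) to (c) above can be sketched as follows; this is a minimal illustration in Python, assuming that each frame arrives as a two-dimensional array of pixel values and that the compensation angle f(Θ) is substantially 90°. The sub-portion coordinates are hypothetical parameters introduced for illustration only.

```python
def extract_subimage(frame, top, left, height, width):
    """Select a sub-portion 2230 of a captured image (step (b))."""
    return [row[left:left + width] for row in frame[top:top + height]]

def rotate_90(subimage):
    """Apply the compensation rotation of substantially 90 degrees (step (c)).
    Rotating counter-clockwise: the rightmost column becomes the top row."""
    return [list(row) for row in zip(*subimage)][::-1]

def correct_stream(frames, top, left, height, width):
    """Process the temporal sequence p(1)..p(n) into rotation-corrected
    sub-images 2240 (steps (a) to (c) combined)."""
    return [rotate_90(extract_subimage(f, top, left, height, width))
            for f in frames]
```

In a full implementation the rotation would be a function of the accelerometer's angular signal, as described below, rather than a fixed 90°.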
The rotation-corrected sub-images 2240 are stored in the data memory 2030, optionally together with a copy of the temporal sequence of images 2220 being retained in the data memory 2030. Optionally, an accelerometer included within the device 2010 and coupled to the computing hardware 2020 provides an angular signal of an orientation of the elongate axis 2110, and the rotation applied in aforesaid step (c) is made a function of the angular signal so that the rotation-corrected sub-images 2240 always appear in an upright orientation, despite the user varying an orientation of the device 2010 when capturing the video content in the data stream 2210. Optionally, the rotation correction applied is substantially 90°, for example in a range of 65° to 115°. Optionally, the angular signal is stored in the data memory 2030 for future use, together with the sequence of images 2220. Optionally, a gyroscopic sensor 2245, for example a silicon-micromachined vibrating structure rotation sensor, is included in the device 2010 and coupled to the computing hardware 2020 for providing a real-time angular orientation signal 2250 indicative of an instantaneous angular orientation of the device 2010 about substantially a central point 2260 within the device 2010; the real-time angular orientation signal 2250 is employed by the computing hardware 2020, under direction of the software application 2200, to adjust a position within the images 2220 from where the sub-portions 2230 are extracted, as illustrated in FIG. 16, to provide a compensation for any temporally abrupt angular displacement of the device 2010 by its user when capturing video content. The software application 2200 is operable to enable the device 2010 to take short clips 2255 of video content, for example short clips of defined duration D, for example in a range of 1 second to 20 seconds duration, more optionally in a range of 1 second to 10 seconds duration, and yet more optionally substantially 3 seconds duration.
Thus, within the device 2010, the data stream 2210 is processed as video clips as illustrated in FIG. 17, each clip having a duration D, wherein each video clip is a self-contained data unit; for example, when the video clip is encoded via MPEG, each video clip would have a commencing intra frame (I-frame) with subsequent predicted frames (P-frames) and/or bidirectional frames (B-frames). In an event of a rapid image change within a given video clip, one or more additional I-frames are optionally included later in the given video clip. However, the software application 2200 executing upon the device 2010 is optionally capable of supporting other types of image encoding alternatively, or in addition, to MPEG encoding, for example JPEG, JPEG2000, PNG, GIF, RLE, Huffman, DPCM and so forth.
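The segmentation of the data stream 2210 into self-contained clips of duration D can be illustrated by the following Python sketch, which operates on frame timestamps; the I-frame/P-frame labelling stands in for the MPEG structure described above (the optional extra I-frames on rapid image changes are omitted for brevity, and the grouping rule is an assumption).

```python
def segment_into_clips(frame_times, duration_d=3.0):
    """Group a stream of frame timestamps into self-contained clips of
    duration D.  The first frame of each clip is labelled as an intra (I)
    frame; subsequent frames in the clip are labelled as predicted (P)
    frames."""
    clips, current, clip_start = [], [], None
    for t in frame_times:
        # Start a new clip when D seconds have elapsed since the clip began.
        if clip_start is None or t - clip_start >= duration_d:
            if current:
                clips.append(current)
            current, clip_start = [], t
        current.append(('I' if not current else 'P', t))
    if current:
        clips.append(current)
    return clips
```

Because every clip opens with an I-frame, each clip remains decodable on its own, which is what makes the clips independently sortable and uploadable later in the description.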
The device 2010 is conveniently operated, when taking one or more aforesaid video clips, by the user holding the device 2010 in one of his/her hands, with the elongate axis 2110 in a substantially vertical direction, with the optical pixel array sensor 2060 pointing in a direction away from the user, with the touch-screen graphical user interface 2040 facing towards the user being provided, via execution of the aforesaid software application upon the computing hardware 2020, with an active area corresponding to a "record button", for example optionally shown as a region of the touch-screen graphical user interface 2040 presented in red colour. The device 2010 then optionally operates such that:
(a) depressing the record button for a short instance causes the device 2010 to capture one video clip of duration D, for example substantially 3 seconds duration; and
(b) maintaining the record button depressed continuously causes the device 2010 to capture a temporally concatenated sequence of video clips of duration D, for example substantially 3 seconds duration, until the user ceases to depress the record button.
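The tap-versus-hold behaviour of the record button in (a) and (b) can be sketched as follows; a minimal Python illustration in which rounding a final partial clip up to a whole clip is an assumption not stated in the description.

```python
import math

def clips_for_press(press_seconds, duration_d=3.0):
    """Map one press of the record button to a number of captured clips:
    a short tap yields a single clip of duration D, while holding the
    button yields a temporally concatenated sequence of D-second clips."""
    if press_seconds <= duration_d:
        return 1  # case (a): one clip regardless of how brief the tap is
    return math.ceil(press_seconds / duration_d)  # case (b)
```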
Such a manner of operation encourages the user to take short series of video clips of subject matter which is specifically of interest. Moreover, it also encourages the user to take short single clips of events. Enabling the device 2010 to capture images and/or video clips in an orientation with its elongate axis in a substantially vertical direction places the user and the device 2010 in a posture for undertaking a customary telephone conversation; this enables the user to capture video content in an unobtrusive manner, whilst appearing to be undertaking a telephone conversation, thereby enabling scenes to be captured by the device 2010 in a less imposing and more natural manner and potentially resulting in more interesting video content being generated.
The aforesaid video clips recorded in the data memory 2030 soon occupy considerable memory capacity therein, especially if the user elects to keep both data corresponding to the data stream 2210 as well as data corresponding to the rotation-corrected sub-images 2240. Optionally, initially, merely the data stream 2210 is recorded in the data memory 2030 together with rotational angle data pertaining to the device 2010 at a time when the one or more video clips and/or still images were captured, with the rotation-corrected sub-images 2240 being subsequently generated after capture of the data stream 2210. When the device 2010 is operated, when executing the software application 2200, to capture still images, the record button functions in a manner akin to a conventional camera shutter button, namely an image is captured at the instant the record button is depressed by the user.
The data volume associated with one or more video clips stored in the data memory 2030 can become considerable, such that it is desirable for the software application 2200, when executed upon the computing hardware 2020, to provide the user with an opportunity to review the video clips to decide which to retain and which to discard. In view of the touch-screen graphical user interface 2040 being of rather limited area relative to a screen area of a lap-top computer or desk-top computer, the software application 2200 is arranged to cause the computing hardware 2020 to function in a radically different manner in comparison to known contemporary video content manipulation software employed in lap-top and desk-top computers.
Referring to FIG. 17, the touch-screen graphical user interface 2040 is relatively small in area, for example 4 cm x 7 cm in spatial extent. People with poorer eyesight, for example more mature users, are often not able to distinguish fine detail on the touch-screen graphical user interface 2040, despite it being technically feasible to provide the interface 2040 with a high pixel resolution by microfabrication processes. However, the user interface 2040 is capable of supporting, in conjunction with the software application 2200, tapping and swiping motions of the user's fingers for instruction input purposes to the computing hardware 2020. Such tapping and swiping motions are clearly distinguished from click-and-drag motions customarily employed in conventional video editing software executable upon lap-top and desktop personal computers.
In FIG. 17, after capture of a series of video clips has been executed, for example individual temporally isolated video clips or temporally concatenated sequences of video clips, the software application 2200 executed upon the computing hardware 2020 provides an editing mode of operation which the user employs with the user interface 2040 facing towards the user. Optionally, an elongate dimension of the user interface 2040 is arranged to be top-bottom, and a transverse dimension of the user interface is arranged to be left-right as observed by the user. On a left-hand portion 2300 of the user interface 2040, the software application 2200 executing upon the computing hardware 2020 is operable to present a sequence of captured video clips as miniature icons 2310 along an axis 2320 from top to bottom. The video clips shown by the icons 2310 are optionally presented in a temporal sequence in an order in which they were captured by the device 2010. The user is able to scroll up and down the icons by way of a finger swiping motion applied to the user interface 2040. Moreover, the user is able to view a given video clip by tapping upon a corresponding icon 2310 displayed along the axis 2320. On a right-hand portion 2350 of the user interface 2040, there are presented icons corresponding to a plurality of primary sorting "bins", for example a "trash bin" 2360, a "best video clip bin" 2370 and one or more "moderate interest bins" 2380, for example two moderate interest bins 2380(1), 2380(2). Beneficially, the "best video clip bin" 2370 is spatially remote in the right-hand portion 2350 relative to the "trash bin" 2360 as illustrated in FIG. 17, with the one or more "moderate interest bins" 2380 interposed therebetween. When sorting the video clips represented by the icons 2310 along the axis 2320, the user positions his/her finger or thumb over a given icon 2310 of a given video clip to be sorted and then swipes the icon 2310 into a bin desired by the user.
For example, video clips that are not to be retained in the data memory 2030 are selected for deletion by the user swiping icons 2310 corresponding to the video clips towards the "trash bin" 2360. Moreover, video clips that are definitely to be retained in the data memory 2030 are selected for keeping by the user swiping icons 2310 corresponding to the video clips towards the "best video clip bin" 2370. Furthermore, video clips that are to be retained, at least in a short term, in the data memory 2030 are selected for intermediate storage by the user swiping icons 2310 corresponding to the video clips towards the one or more "moderate interest bins" 2380. The one or more "moderate interest bins" 2380 are optionally susceptible to being given names chosen by, and thus meaningful to, the user. When the icons 2310 have been sorted along the axis 2320, the user can invoke, for example by a finger or thumb tapping action, an "empty trash bin" icon on the user interface 2040 to delete the video clips sorted by the user into the "trash bin" 2360, to free space in the data memory 2030 for receiving future video clips. Optionally, the software application 2200 provides a secondary sorting function, wherein the user invokes a given primary bin from the right-hand portion 2350 by a finger or thumb tapping action. Such an action causes the user interface 2040 to switch to a secondary mode, wherein the video clips within the given primary bin appear along the axis 2320 in the left-hand portion 2300 of the user interface 2040. Moreover, one or more secondary bins 2400, for example three secondary bins 2400(1), 2400(2), 2400(3), are presented in the right-hand portion 2350, enabling the user by way of a swiping action as aforementioned to sort the contents of the given primary bin into one or more of the secondary bins 2400 presented in the right-hand portion 2350.
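The primary sorting behaviour described above can be modelled by the following Python sketch; the bin names and clip identifiers are illustrative stand-ins, and only the bin membership (not the touch gestures themselves) is modelled.

```python
class ClipSorter:
    """Model of the primary sorting bins: a swipe moves a clip into a
    named bin, and emptying the trash bin frees space in the data
    memory by discarding its contents."""

    def __init__(self, bin_names=('trash', 'best', 'moderate_1', 'moderate_2')):
        self.bins = {name: [] for name in bin_names}

    def swipe(self, clip_id, bin_name):
        """A swipe of an icon towards a bin assigns the clip to that bin."""
        self.bins[bin_name].append(clip_id)

    def empty_trash(self):
        """Invoke the "empty trash bin" action; returns identifiers of
        the clips removed from the data memory."""
        deleted = self.bins['trash']
        self.bins['trash'] = []
        return deleted
```

The secondary sorting mode described above would simply instantiate a further `ClipSorter` over the contents of one primary bin.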
Tertiary and higher order sorting of video clips into tertiary and higher order bins via swiping actions executed by the given user on the user interface 2040 is optionally supported by the software application 2200 executing upon the computing hardware 2020 of the device 2010.
Referring next to FIG. 18, the software application 2200 executing upon the computing hardware 2020 is operable to enable the user to upload one or more video clips sorted into one or more bins to be retained by communicating data corresponding to the one or more video clips via wireless through the wireless communication interface 2050 to one or more remote proxy servers 2500. Optionally, the software application 2200 enables the device 2010 to be used to manipulate the uploaded data clips on the one or more remote proxy servers 2500, for example to assemble the uploaded video clips into a composite video creation to which additional sound effects, additional sound tracks, and additional video effects can be added, prior to the composite video creation being broadcast via social media, for example via YouTube, Facebook or similar. When video editing of the uploaded one or more video clips is executed at the one or more proxy servers 2500, only relatively small data flows associated with user instructions are communicated via the device 2010 to the one or more proxy servers 2500. The user is optionally allowed to include into the composite video creation one or more video clips that the user has earlier uploaded to the one or more proxy servers 2500, as well as authorized third-party video clips and sound tracks, for example music tracks; these authorized third-party video clips and sound tracks, as well as the user's earlier uploaded video clips, are beneficially represented by thumb-nail icons on the user interface 2040, thereby avoiding a need to download complete data corresponding to the authorized third-party video clips and sound tracks, as well as the user's earlier uploaded video clips, to the device 2010.
Optionally, when uploading video clips from the device 2010 to the one or more proxy servers 2500, either the rotation-corrected sub-images 2240, or data corresponding to the data stream 2210 giving rise to the rotation-corrected sub-images 2240, or both, are uploaded; the user is optionally provided with an option of which to choose, depending upon whether or not there is a need for the user to revert back to the data corresponding to the original data stream 2210. Clearly, uploading only the rotation-corrected sub-images 2240 requires less transfer of data and is hence faster and/or less demanding in available wireless data communication capacity.
The invention is capable of providing numerous benefits to the user. The software application 2200 executing upon the computing hardware 2020 of the device 2010 is operable to capture data from one or more sensors in a more convenient manner, thereafter to provide the user with an environment in which to perform various processing operations on the captured data, despite the device 2010 having a relatively smaller graphical user interface 2040, and then to communicate resulting processed data to one or more proxy servers 2500 via wireless or direct wire and/or optical fibre communication connection. Such wire and/or optical fibre communication is beneficially achieved by way of the device 2010 communicating via near-field wireless communication to a communication node in close spatial proximity to the device 2010, where the communication node has a direct physical connection to a communication network, for example the Internet; such near-field wireless communication can, for example, be performed by the user, after capturing video clips, placing the device 2010 in close proximity with a lap-top computer or desk-top computer also equipped with near-field wireless communication, for example conforming to BlueTooth or a similar communication standard. "BlueTooth" is a registered trade mark.
As aforementioned, the device 2010 is beneficially a contemporary wireless smart phone, for example a standard mass-produced item, which is adapted, by executing the software application 2200 thereupon, to implement the present invention. The software application 2200 is beneficially implemented as an "App" which can be downloaded from an "App store", namely an external database, or else provided preloaded into the smart phone at its initial purchase by the user.
Optionally, when the system 1100 downloads one or more video clips for presentation to the device 1110, and/or the device 1110 replays one or more video clips 1330 stored in its data memory 1130, the given user 1260 is provided with an instantaneous player experience, namely commencing with a short duration highlight, for example implemented as a GIF style clip having a duration in a range of 4 to 10 seconds, whereafter the player experience progresses to full high definition (HD). It is thereby feasible, for example, to present initially a plurality of video clips 1330, for example two to three video clips 1330, before reverting to the aforesaid high definition (HD), so that the user 1260 is provided with an effective oversight of choice of video clips 1330. For example, such an experience is capable of being provided by sending a JPEG image wrapped as a Quicktime movie video file; "Quicktime" is a registered trademark.
Optionally, when the devices 100, 1110, 2010, for example implemented as one device, are being employed to capture, namely shoot, a series of images, namely pictures, namely in a "burst mode", the devices 100, 1110, 2010 are optionally operable to employ high resolution images as video key frames, and the devices 100, 1110, 2010 are then operable to add lower quality reference frames thereto, for example for building up a form of image movie sequence. Optionally, when the devices 100, 1110, 2010 are being used by their respective users 1260 to watch a video clip 1330, the following sequence of user actions is beneficially performed in the devices 100, 1110, 2010:
(i) a given user 1260 watches a given video clip 1330,
(ii) the given user 1260 taps upon a presentation screen of the devices 100, 1110, 2010, for example taps the display 2040;
(iii) the software 2200 executing upon the devices 100, 1110, 2010 is operable to employ an analysis algorithm to measure how fast the given user 1260 reacts, namely the software 2200 determines user-reaction-times, based upon one or more of: user interaction in respect of a speed editor, in respect of a sound graph on a portion of the video clip 1330, in respect of a scene change or other activity on a video clip 1330 being watched by the given user 1260;
(iv) the software 2200 executing upon the devices 100, 1110, 2010 is operable to interpret single taps applied by the given user 1260 as points of interest for creating a video edit when editing the given video clip 1330; and
(v) the software 2200 executing upon the devices 100, 1110, 2010 is optionally operable to suggest areas of interest in the video clip 1330 that the given user can jump to during editing. When employing the devices 100, 1110, 2010, the software 2200 is optionally operable to provide:
(a) a colour grading mechanism;
(b) a mode for creating colour grade stripes; and
(c) a colour stripe edit tool.
When implementing the colour grading mechanism, it is desirable that information that is used by the devices 100, 1110, 2010 to grade images, namely pictures, according to their colour is beneficially stored in a single encrypted uncompressed image. Moreover, when implementing such a colour grading mechanism, when a given user 1260 selects a colour grade to apply to his/her images, the software 2200 causes the devices 100, 1110, 2010 to read a stripe of the images for determining information required by the software to apply a transformation to the images; in other words, a stripe-based colour assessment of images is employed in the system 1100 in conjunction with the devices 100, 1110, 2010. When implementing the mode for creating colour grade stripes, a given image is used as a basis for the software 2200, for example in conjunction with the system 1100, to create a colour grade stripe. Thereafter, the given image is scanned to extract information therefrom indicative of all colours present in the given image. Next, the colours present are divided evenly to indicate a full range of the different colours and shades present in the image. Thereafter, from this information, a stripe is created by the software 2200 for use in colourizing other video clips 1330. Such a mode is beneficial, for example, when editing or creating other video clips 1330, for example for maintaining a colour scheme throughout a duration of one or more video clips 1330. Moreover, such a mode is beneficial when creating vintage video clips 1330, wherein a theme of vintage greyscale images is to be utilized throughout the video clips 1330.
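The stripe-creation steps (scan the image, extract the colours present, divide them evenly) can be sketched as follows; in this minimal Python illustration, single integer values stand in for colours, the stripe length is a hypothetical parameter, and the encryption and exact stripe image format mentioned above are outside the scope of the sketch.

```python
def create_colour_stripe(pixels, stripe_length=8):
    """Create a colour grade stripe from a given image: scan all colours
    present, order them, then sample them evenly so the stripe represents
    the full range of colours and shades in the image."""
    unique = sorted(set(pixels))  # all distinct colours present
    if len(unique) <= stripe_length:
        return unique
    step = len(unique) / stripe_length  # divide the colours evenly
    return [unique[int(i * step)] for i in range(stripe_length)]
```

Applying a grade would then read such a stripe back and map the colours of a target clip onto it, consistent with the stripe-based colour assessment described above.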
The colour stripe editing tool is beneficially provided by the software 2200 and/or the system 1100, wherein the colour stripe tool can be used to create new colour grading stripes for use when processing images. Optionally, a video clip 1330 can be added, whereafter the system 1100 executes a short analysis and thereafter makes recommendations to the given user 1260 regarding which types of colour grades would be most suitable to a given video clip 1330.
When a given user 1260 uploads a long video to the system 1100, for example at least an order of magnitude longer in duration when played than video clips 1330 described in the foregoing, the system 1100 beneficially analyzes the uploaded long video for determining at least one of: audio variation in the long video, colour variation in the long video, brightness variation in the long video, scene change timing in the long video, movement information present in the long video (for example, by vector analysis in a compressed H.264 video format domain), facial information in the long video, geographic metadata present in the long video, timing metadata in the long video. From such determination of information, algorithms executing in the system 1100 and/or on the devices 100, 1110, 2010 are operable to split, for example at I-frames present in the long video, to create segments of interesting video, namely to subdivide the long video into a plurality of smaller video clips 1330 which the given user 1260 is able to manipulate and incorporate into his/her video compilations. Optionally, the segments of interesting video are recompressed and made to conform to a standardized mezzanine format within the system 1100. Moreover, optionally, bytes are extracted and copied from the metadata without compression, thereby enabling searching for segments of video to be executed within the system 1100 based upon metadata searching.
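Splitting a long video at I-frames where the analyzed variation is high can be sketched as follows; this Python illustration assumes each frame is summarised by a tuple of (timestamp, is_iframe, variation_score), where the score is a hypothetical stand-in for the combined audio/colour/brightness/movement analyses listed above.

```python
def split_long_video(frames, threshold=0.5):
    """Subdivide a long video into smaller clips: cut points are placed
    at I-frames whose variation score exceeds a threshold, and segments
    are returned as (start, end) timestamp pairs."""
    cut_times = [t for (t, is_i, score) in frames
                 if is_i and score >= threshold]
    starts = [frames[0][0]] + cut_times
    ends = cut_times + [frames[-1][0]]
    # Drop zero-length segments, e.g. when the final frame is a cut point.
    return [(s, e) for s, e in zip(starts, ends) if e > s]
```

Cutting only at I-frames keeps every resulting segment independently decodable, which is why the description singles out I-frames as split points.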
In an example embodiment of the present invention, there is provided a method of creating video templates for use when automatically editing a given user's 1260 video clips 1330 in a style of their favourite movie, short film, music video or similar. In order to implement such a method, information regarding the structure and features of the favourite movie, short film, music video or similar are determined, and then the information is applied to the given user's 1260 video clips, namely in a manner as will now be described.
For implementing the method, the system 1100 includes a video indexing system that is operable to analyze a video, for example the given user's 1260 favourite video, and to create a list of scene changes and video cuts based on:
(i) analyzing movement changes in the video;
(ii) analyzing colour differences in the video;
(iii) analyzing luminance changes in the video;
(iv) analyzing chrominance changes in the video;
(v) analyzing encoded video vector information present in the video, for example H.264 vector information;
(vi) analyzing changes in audio tracks in the video; and
(vii) extracting audio track information and thereafter using one or more external databases to the system 1100 to ascertain a beat, tempo and/or musical phrase structure of the audio track.
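Of the analyses (i) to (vii) above, the luminance analysis (iii) admits the simplest sketch; the following Python illustration reports a cut wherever mean luminance jumps sharply between consecutive frames. The threshold is a hypothetical parameter, and a full indexer would combine this with the movement, colour, chrominance, vector and audio analyses listed above.

```python
def detect_cuts(luminance, threshold=40.0):
    """Minimal scene-change detector for the indexing step (iii):
    `luminance` holds one mean-luminance value per frame; a cut is
    reported at frame i when the luminance change from frame i-1
    exceeds the threshold."""
    return [i for i in range(1, len(luminance))
            if abs(luminance[i] - luminance[i - 1]) > threshold]
```

The indices returned would populate the list of scene changes and video cuts that the video indexing system builds for a given video.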
Optionally, the video indexing system is operable to scan locally available videos of the system 1100 and perform thereon, in respect of time, a domain-based perceptual hashing and analysis of the video as aforementioned. Moreover, optionally, the video indexing system is operable to play a video back from the Internet (World Wide Web: www) through a standard contemporary web browser interface and then analyze the video in real time while identifying associations with the one or more external databases. Such a video indexing system functionality for the system 1100 provides the given user 1260 with rapid access to video content and audio content which, for example, can be appropriately compiled together with the given user's 1260 video clips 1330 to generate a composite video composition, for example as aforementioned in respect of video clips 1330A, 1330B to generate the video clip 1330C, as illustrated in FIG. 10.
Optionally, the video indexing system employs an application program interface (API) which allows third parties to build a video scanning system for their own websites and/or to execute on their local computing system to assist to build a list of video cut styles for popular movies and TV programmes; such information is then beneficially employed by the given user 1260 when manipulating his/her own video clips to generate longer duration video content in a style of well known video films and such like.
The system 1100 is thus beneficially capable of providing the given user 1260 with a method of using a film database, for example as established by the aforesaid video indexing system, to recreate a video cut style by implementing following steps:
(a) user-choosing popular audio tracks on their local device 100, 1110, 2010;
(b) using the video indexing system to scan one or more remote servers to request metadata which has been analyzed to find one or more matching music videos;
(c) if a video in step (b) is not found, proceeding to search the Internet to find a matching music video and thereafter scheduling an analysis pass as aforementioned;
(d) optionally providing a quick analysis of one or more audio tracks of the one or more matching music videos, for example to extract beat and tempo information, and then creating a cut style based upon such information;
(e) optionally using a name of the one or more matching music tracks, and their associated metadata, to search via the API for tempo, beat, and/or phrase data present in an existing database, for example the proprietary Echonest database; "Echonest" is a trademark; and
(f) optionally allowing the given user 1260 to choose a predefined edit style, for example using information from (e), that instantly provides the given user 1260 with best results when performing video editing. In an example embodiment of the present invention, a TV programme video template style is employed by the given user 1260; an associated method includes steps of:
(a) user-performing a search to find a given television programme that they like, for example Miami Vice or LA Ink (celebrity Kat Von D);
(b) using the system 1100 to scan one or more remote servers to request metadata derived from analysis of the TV programme to find matching TV programmes or similar genres of programmes, for example a Miami Vice search returns a CSI template as an associated suggested cut style;
(c) in an event that a TV programme has not been analyzed and categorized for a benefit of the system 1100, performing an Internet search for a matching TV programme which is then subject to an analysis pass as aforementioned;
(d) for providing the given user with a good editing experience in an absence of precise TV programme analysis data, using the system 1100 to allow the given user 1260 to choose a predefined edit style that instantly, for example, provides best editing results, or using the system 1100 to allow the given user 1260 to choose a predefined TV programme genre and editing style interface, for example editing interfaces such as Crime, 1930's, colour grade - aged film, dust and scratches.
In an embodiment of the invention adapted for editing pursuant to a blockbuster movie, the system 1100 employs a method including:
(i) user-downloading a software application to his/her local computer, for example to the device 100, 1110, 2010;
(ii) user-choosing a movie that the given user 1260 likes and is available locally to the given user 1260;
(iii) using the system 1100 to request basic metadata pertaining to the movie, and then using one or more third party databases to auto-fill any missing data to enhance the basic metadata;
(iv) optionally using the system 1100 to scan one or more remote servers to request film cut metadata that potentially already exists;
(v) optionally, in an event that a TV programme is identified that the system 1100 has not analyzed, using the system 1100 to store associated video cut data both locally, for example in the device 100, 1110, 2010, and to one or more remote servers; and
(vi) optionally, for example in an event that the given user 1260 does not wish to wait for the system 1100 to execute an analysis pass, allowing the user 1260 to choose a predefined edit style that instantly gives his/her best results, and allowing the given user 1260 to define a cut style by choosing a predefined film genre and associated style interface, for example gladiator/action, antiquity, colour grade - Technicolor; "Technicolor" is a trademark.
In the embodiment, once a cut style has been created or downloaded to the system 1100 from one or more external servers, the system 1100 beneficially guides the given user 1260 in one or more ways as follows:
(a) in a simple manner, the given user 1260 is guided and orders one or more videos that he/she wants to choose, and the system 1100 cuts them to a tempo of a target input film and/or input music;
(b) in an advanced manner, the given user 1260 is instructed, via a simple mechanism, to sort the videos according to cuts that are needed, namely in a manual or semi-manual manner. Optionally, the system 1100 provides the given user 1260 with a simple storyboarding tool that presents to the given user 1260 what types of shots, for example video clips 1330, are needed to recreate a film.

In an example embodiment of the present invention, there is provided a method of selecting video content, editing video content, sorting videos and asynchronously updating associated metadata accessible to the system 1100 over low-bandwidth communication connections. The system 1100 is thus operable to provide an editing platform, for example as aforementioned. The system 1100 is operable to allow the given user 1260 to add metadata to a local or remote video via an associated still picture image.
In practice, video files are large, for example several hundred Mbytes or even several Gbytes in size, and are thus difficult to sort via low bandwidth communication networks, for example linking the device 100, 1110, 2010 to its associated server arrangement of the system 1100 as aforementioned. Moreover, it is found conventionally in practice that it is a relatively complex task to add metadata to video, often conventionally requiring specialist tools. In contradistinction, the example embodiment provides a simple method for the given user 1260 to sort, edit, order, request, output and add metadata to video files using only still picture frames, wherein such still picture frames are easier to handle via low-bandwidth communication networks. In the method of selecting video content, using the system 1100 as a FrameBlast platform to create videos, single frames of video content are outputted at specific times from a beginning of a video that contains metadata; such metadata includes, for example, name, type, format, date, hash sum, UUID, location, geo-location and so forth. Optionally, first frames are beneficially outputted and saved, and thereafter video images are extracted at predefined intervals, for example predefined regular intervals; there is thereby generated an image file consisting of still images.
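The frame-extraction schedule just described can be sketched as follows; a minimal Python illustration in which the inclusion of the final frame follows the worked example in Table 1 (a minute-long video yielding seven thumbnails at 10 second intervals), and the interval itself is a configurable assumption.

```python
def extraction_times(duration_seconds, interval=10.0):
    """Times (in seconds) at which single still frames are outputted
    from a video: the first frame, then one frame per predefined
    regular interval, then the last image of the video."""
    times, t = [], 0.0
    while t < duration_seconds:
        times.append(t)
        t += interval
    times.append(duration_seconds)  # last image of the video
    return times
```

The frames extracted at these times form the still-image file that carries the embedded metadata described next.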
The method then progresses to use the image file for bi-directional metadata transport, wherein metadata is beneficially embedded in respect of the still images using software tools such as EXIF, XMP and IPTC; "EXIF", "XMP" and "IPTC" are trademarks. Optionally, there are three forms of metadata that thereby co-exist embedded into the data file including the still images, in one or more suitable formats. The three forms of metadata optionally include:
(i) Persistent metadata: such persistent metadata optionally includes technical metadata concerning where and when the video was shot, format of the video, resolution of the video, UUID of the video, and so forth;
(ii) User/owner metadata: such user/owner metadata optionally includes any metadata that has been added to the video to enhance its discovery when searches are implemented and when categorization activities are undertaken within the system 1100. Optionally, the user/owner metadata includes metadata indicative of how the video, for example a video clip thereof, has been used and by whom; and
(iii) Social/collaboration metadata: such social/collaboration metadata optionally includes data added to video files by the given user 1260, and/or other users, in a form of comments or feedback that relate to subject matter of the video.
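A minimal sketch of how the three forms of metadata might co-exist in a single embedded record is given below; all field names are illustrative assumptions, not a schema defined by the system 1100, and a real implementation would serialize such a record into EXIF, XMP or IPTC fields of the still image.

```python
import json

def build_embedded_metadata(persistent, user_owner, social):
    """Combine the three metadata forms into one record suitable for
    embedding into a still image. Field names are illustrative only."""
    return json.dumps({
        "persistent": persistent,   # technical facts: where/when shot, format, UUID
        "user_owner": user_owner,   # discovery and categorization data
        "social": social,           # comments and feedback from users
    })

record = build_embedded_metadata(
    {"uuid": "1b4e28ba", "format": "mp4", "resolution": "1280x720"},
    {"tags": ["holiday", "beach"], "used_by": ["user-1260"]},
    {"comments": [{"user": "user-1261", "text": "Great shot!"}]},
)
print(json.loads(record)["persistent"]["format"])  # mp4
```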
Use cases for the method, namely type 1, type 2 and type 3, are elucidated in Table 1:

Table 1: Use cases for the method

Type 1: for automatically creating a server-side request for a segment of a video, wherein the segment is defined by two points in time in a timeline of the video.
Steps: A series of image files, for example thumbnail images, that have been created from a single piece of video are requested via a simple web or mobile web browser interface. The given user 1260 sorts the series of image files in a simple image organisation software application such as Google Picasa; "Google" and "Picasa" are trademarks. A server-side request is optionally made for a stream of video to be downloaded, the request being created by e-mailing, for example, two thumbnail images to a secret user-specific e-mail address; for example, for a minute-long-duration video, seven thumbnails, for example at 10-second intervals, are created, wherein the first thumbnail is a first frame of the video and the last thumbnail is a last frame of the video. In an event that there are seven thumbnails in all, and the given user 1260 selects thumbnail three, namely starting at 20 seconds in the example, and thumbnail five, namely starting at 40 seconds in the example, a server supporting the system 1100 will return a segment of video having a duration of 20 seconds with a start point corresponding to a time 00:00:20.000 and an end point corresponding to a time 00:00:40.000.
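The server-side segment arithmetic of the Type 1 use case can be sketched as follows, assuming the first thumbnail marks time zero and successive thumbnails are spaced at the 10-second interval; the function name and signature are hypothetical.

```python
def segment_from_thumbnails(first_idx, second_idx, interval_s=10.0):
    """Map two selected thumbnail indices (1-based, thumbnail 1 being
    the first frame at t=0) to start/end times, in seconds, of the
    requested video segment."""
    start = (first_idx - 1) * interval_s
    end = (second_idx - 1) * interval_s
    return start, end

# Selecting thumbnails three and five of a minute-long video:
start, end = segment_from_thumbnails(3, 5)
print(start, end, end - start)  # 20.0 40.0 20.0
```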
Type 2: for social feedback related to a specific frame of a video.
Steps: As part of a campaign, for example, thumbnails, namely single frames of a video, are automatically posted to one or more social media web-sites. A tracking system would know locations of the image files and track corresponding user feedback, for example likes and comments. Data corresponding to such user feedback is beneficially appended to a source image file for later use, for example as complementary metadata to assist searching operations executed within the system 1100.
Type 3: for sorting and ordering via use of thumbnail images.
Steps: A given user 1260 employs a simple image management tool to sort and order thumbnail images, for example using one or more of the devices 100, 1110, 2010.
Selections made by the given user 1260 are then uploaded to a FrameBlast backend of the system 1100 which, in return, creates a simple edit project that can be opened on-line at a later time.

Another example embodiment of the invention concerns the system 1100 employing a method of storing metadata needed to recreate an edited video from any device, for example the devices 100, 1110, 2010, using metadata relating to source material used, wherein the metadata is embedded invisibly into media files such as still images, video and even audio files. Moreover, the method is also optionally able to embed an EDL as a sonic fingerprint that is part of an audio track.
In an example situation, the given user 1260 edits source videos to combine them together to generate a composite video, wherein the source videos are optionally recorded via use of a mobile device, streamed from one or more external servers, and/or are downloaded from one or more external servers; in FIG. 10 combining two video clips 1330A, 1330B to generate a composite video clip 1330C is shown as an example. A list of source videos used to create the composite video, namely output, is commonly known as an Edit Decision List, or "EDL" as aforementioned. The EDL is used by a video rendering engine of the system 1100 to create the aforesaid composite video.
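An EDL of the kind described above can be modelled minimally as a list of (source, in-point, out-point) entries; the following sketch uses assumed field names rather than the actual EDL format employed by the system 1100, and the duration calculation illustrates how a rendering engine would derive the length of the composite output.

```python
from dataclasses import dataclass

@dataclass
class EdlEntry:
    source_uuid: str   # UUID identifying the source video
    in_point: float    # seconds into the source where the cut starts
    out_point: float   # seconds into the source where the cut ends

def composite_duration(edl):
    """Total duration of the composite video described by an EDL."""
    return sum(e.out_point - e.in_point for e in edl)

# Combining two clips, cf. clips 1330A and 1330B into composite 1330C:
edl = [EdlEntry("clip-1330A", 0.0, 3.0), EdlEntry("clip-1330B", 1.5, 4.0)]
print(composite_duration(edl))  # 5.5
```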
The method includes:
(i) using a mobile video editing system, for example a proprietary iOS-based and Android-based system hosted by the system 1100, to take a selection of videos, to select portions of these videos, to order multiple videos into a seamless playlist, namely an EDL, to preview the multiple videos as if they were one seamless file, and to render therefrom a new video file;
(ii) storing the seamless playlist from (i) on a standard editing system, for example in a project file;
(iii) storing the seamless playlist from (i) into a custom "atom" in a header of the composite file, namely output file, and/or embedded as metadata into a thumbnail representing a scene or multiple scenes from the composite video and/or a sonic fingerprint interleaved into the audio track at a frequency range which is not readily audible to the human ear;
(iv) subsequently reconstructing the composite video, for example in the system 1100, wherein the given user 1260 does not need to know any specific project file, but rather only requires access to embedded metadata. When the system 1100 is given a composite video file, it thus first scans a video container for the custom "atom" of the composite video file, reads therefrom the playlist, namely EDL, and then searches locally within the system 1100 or remotely in respect of the system 1100 for corresponding source video and metadata files, and thereafter opens these in a video editor for remixing purposes.
Optionally, in the method, the source video files are provided with corresponding UUIDs which are embedded into headers of video containers of the source video files to enable them to be found, even if video file names of the source video files have, for example, been changed by the given user 1260. Optionally, a hashing system, for example a perceptive- or data-based hashing system, is employed when implementing the method for quickly narrowing down searches for videos stored locally and/or remotely in respect of the system 1100. Optionally, each EDL contains a local source filename for a given video together with a shortened URL identifier that allows the video to be tracked and found at multiple locations on the Internet (www) using a DNS-type dynamic tracking system; such a feature is useful for detecting unauthorised use of video in violation, for example, of copyright and/or licensing arrangements, and for generating revenues for proprietors of the video. In an event that a given video does not include a custom "atom", the system is optionally operable to check for still images associated with the video and to read corresponding EXIF, XMP or IPTC metadata for the presence of the EDL; if such detection fails, the system is optionally operable to check audio tracks of the video for high-frequency audio content which has the EDL encoded into it.
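The fallback order for recovering an embedded EDL (custom atom first, then still-image metadata, then the sonic fingerprint) can be sketched as follows; the dict-based lookups stand in for real container, EXIF/XMP/IPTC and audio-track parsers, which are assumed rather than implemented here.

```python
def recover_edl(container_atoms, still_image_metadata, audio_fingerprint):
    """Search for an embedded EDL in the fallback order described:
    1. a custom "atom" in the video container header;
    2. EXIF/XMP/IPTC metadata of an associated still image;
    3. a high-frequency sonic fingerprint in the audio track.
    Each argument is a dict-like lookup (or None); the first EDL
    found is returned, else None."""
    if container_atoms and "edl" in container_atoms:
        return container_atoms["edl"]
    if still_image_metadata and "edl" in still_image_metadata:
        return still_image_metadata["edl"]
    if audio_fingerprint:
        return audio_fingerprint
    return None

# No custom atom present, but the thumbnail's metadata carries the playlist:
edl = recover_edl(None, {"edl": ["clip-A", "clip-B"]}, None)
print(edl)  # ['clip-A', 'clip-B']
```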
For further explanation, EXIF is elucidated in more detail in http://en.wikipedia.org/wiki/Exchangeable_image_file_format, which is hereby incorporated by reference.
Moreover, for further explanation, XMP is elucidated in more detail in http://en.wikipedia.org/wiki/Extensible_Metadata_Platform, which is hereby incorporated by reference.
Furthermore, IPTC is elucidated in more detail in http://en.wikipedia.org/wiki/IPTC_Information_Interchange_Model, which is hereby incorporated by reference.
Additionally, further information pertaining to digital container formats, for example in association with the aforementioned containers for video, is elucidated in more detail in http://en.wikipedia.org/wiki/Digital_container_format, which is hereby incorporated by reference.
Optionally, in an event that the system fails to find the source video files, the system is operable to segment the composite video file into its component video parts and load them for re-editing; however, transforms applied when generating the composite video often modify or strip data originally present in the component video parts.
In the foregoing, video content is beneficially temporally auto-stretched to fit specific edits desired by the given user 1260. Moreover, the system 1100 beneficially hosts a fraud protection algorithm that enables content usage tracking to be achieved, and which is beneficially synergistically also employed to predict what kind of content is preferred by the given user 1260. The fraud protection algorithm is operable to infer what the given user 1260 may like by correlating to what other users of the system 1100 like that have downloaded similar videos, for example where similarity is determined, at least in part, by metadata correlation. The fraud protection algorithm is operable to employ tags, channels, algorithm types and effects usage as input for use in tracking dissemination of video files. Furthermore, the fraud protection algorithm is operable to employ Eigenvector analysis to find principal routes by which a given video file is disseminated and used, namely by analyzing for principal links.
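The Eigenvector analysis mentioned above can be illustrated with a PageRank-style power iteration over a dissemination graph; this is a minimal sketch under the assumption that dissemination events are available as an adjacency matrix, and is not the actual fraud protection algorithm of the system 1100.

```python
def principal_node(adjacency, damping=0.85, iterations=100):
    """PageRank-style power iteration over a dissemination graph:
    adjacency[i][j] = 1 when a video was passed from node i to node j.
    The dominant eigenvector concentrates weight on nodes lying on
    principal routes by which a video file spreads."""
    n = len(adjacency)
    out_deg = [sum(row) or n for row in adjacency]  # guard dangling nodes
    x = [1.0 / n] * n
    for _ in range(iterations):
        y = []
        for j in range(n):
            # Rank flows along incoming links, scaled by out-degree.
            rank = sum(adjacency[i][j] * x[i] / out_deg[i] for i in range(n))
            y.append((1 - damping) / n + damping * rank)
        x = y
    return max(range(n), key=lambda j: x[j])

# Nodes 0, 1 and 3 all forward the clip to node 2; node 2 forwards to node 3:
graph = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]
print(principal_node(graph))  # 2
```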
The system 1100 is optionally also operable to provide real-time rotoscoping. Such rotoscoping involves server-side conversion of videos for use in green-screening applications. Moreover, such rotoscoping optionally involves mixing music band video with local video, for example in a manner as aforementioned, thereby generating automatic green screens. Furthermore, such rotoscoping optionally involves manual rotoscoping of multiple clips with a music background which is common thereto; optionally, such manual rotoscoping is outsourced in respect of operation of the system 1100.
The system 1100 is optionally also operable to provide an audio re-synchronizing functionality, for example as described in the foregoing, for example for audio "re-syncing" of multiple clips with background music which is common thereto. Such audio "re-syncing" is beneficial when the given user 1260 is desirous to create composite performances, for example in a manner akin to vintage sound-on-sound overlay achieved using magnetic tape audio recorders ("reel-to-reel tape recorders"). Beneficially, for example for tracking fraud and/or copyright infringement, the system 1100 is optionally operable to employ at least one proxy server to record all user inputs and outputs, and thus video and/or audio file manipulation and subsequent dissemination from the system 1100, for example to other users.
Modifications to embodiments of the invention described in the foregoing are possible without departing from the scope of the invention as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "consisting of", "have", "is" used to describe and claim the present invention are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural. Numerals included within parentheses in the accompanying claims are intended to assist understanding of the claims and should not be construed in any way to limit subject matter claimed by these claims.

Claims

1. A video clip editing system (100, 160, 200) employing a mobile telephone (100) including computing hardware (110) coupled to data memory (120), to a touch- screen graphical user interface (130), and to a wireless communication interface (140), wherein the computing hardware (110) is operable to execute one or more software applications (200) stored in the data memory (120), characterized in that the one or more software applications (200) are operable when executed on the computing hardware (110) to provide an editing environment on the touch-screen graphical user interface (130) for editing video clips (410, 510) by user swiping-type instructions entered at the touch-screen graphical user interface (130) to generate a composite video creation, wherein a timeline (400) for icons (410) representative of video clips (410) is presented as a scrollable line feature on the touch-screen graphical user interface (130), and icons (510) of one or more video clips (510) for inclusion into the timeline (400) are presented adjacent to the timeline (400) on the touch-screen graphical user interface (130), such that video clips corresponding to the icons (510) are incorporated onto the timeline (400) by said user employing swiping-type instructions entered at the touch-screen graphical user interface (130) for generating the composite video creation.
2. A video clip editing system (100, 160, 200) as claimed in claim 1, characterized in that the mobile telephone (100) is operable to be coupled in communication with one or more external databases (160) via the wireless communication interface (140), and manipulation of video clips represented by the icons (410, 510) is executed, at least in part, by proxy control directed by the user from the touch-screen graphical user interface (130).
3. A video clip editing system (100, 160, 200) as claimed in claim 1 or 2, characterized in that the one or more software applications (200) when executed upon the computing hardware (110) enables one or more sound tracks to be added to one or more video clips, wherein a duration adjustment of the one or more sound tracks and/or one or more video clips is executed automatically by the one or more software applications (200).
4. A video clip editing system (100, 160, 200) as claimed in claim 3, characterized in that the one or more sound tracks are adjusted in duration without causing a corresponding shift of pitch of tones present in the sound tracks.
5. A video clip editing system (100, 160, 200) as claimed in claim 3, characterized in that the one or more software applications (200) executing upon the computing hardware (110) are operable to cause the one or more video clips to be adjusted in duration by adding and/or subtracting one or more image frames from the one or more video clips.
6. A video clip editing system (100, 160, 200) as claimed in claim 5, characterized in that the one or more software applications (200) executing upon the computing hardware (110) synthesize a new header or start frame (830) of a video clip when a beginning part of the video clip is subtracted during editing.
7. A video clip editing system (100, 160, 200) as claimed in any one of the preceding claims, characterized in that one or more software applications (200) executing upon the computing hardware (110) are operable to provide a selection of one or more video clips (510) for inclusion into the timeline (400) presented adjacent to the timeline (400) on the touch-screen graphical user interface (130), wherein the selection is based upon at least one of:
(a) temporally mutually substantially similar temporal capture time of the video clips;
(b) mutually similar subject matter content determined by analysis of the video clips or of corresponding metadata; and
(c) mutually similar geographic location at which the video clips were captured.
8. A method of editing video clips by employing a mobile telephone (100) including computing hardware (110) coupled to data memory (120), to a touchscreen graphical user interface (130), and to a wireless communication interface (140), wherein the computing hardware (110) is operable to execute one or more software applications (200) stored in the data memory (120), characterized in that said method includes: (a) executing the one or more software applications (200) on the computing hardware (110) for providing an editing environment on the touch-screen graphical user interface (130) for editing video clips (410, 510) by user swiping-type instructions entered at the touch-screen graphical user interface (130) to generate a composite video creation;
(b) generating a timeline (400) for icons (410) representative of video clips (410) as a scrollable line feature on the touch-screen graphical user interface (130);
(c) generating icons (510) of one or more video clips (510) for inclusion into the timeline (400) adjacent to the timeline (400) on the touch-screen graphical user interface (130); and
(d) incorporating video clips corresponding to the icons (510) onto the timeline (400) by said user employing swiping-type instructions entered at the touchscreen graphical user interface (130) for generating the composite video creation.
9. A method as claimed in claim 8, characterized in that the method further includes operating the mobile telephone (100) to be coupled in communication with one or more external databases (160) via the wireless communication interface (140), and manipulating video clips represented by the icons (410, 510), at least in part, by proxy control directed by the user from the touch-screen graphical user interface (130).
10. A method as claimed in claim 8 or 9, characterized in that the method includes enabling, by way of the one or more software applications (200) executing upon the computing hardware (110), one or more sound tracks to be added to one or more video clips, wherein a duration adjustment of the one or more sound tracks and/or one or more video clips is executed automatically by the one or more software applications (200).
11. A method as claimed in claim 10, characterized in that the method includes adjusting a duration of the one or more sound tracks without causing a corresponding shift of pitch of tones present in the sound tracks.
12. A method as claimed in claim 10, characterized in that the method includes executing the one or more software applications (200) upon the computing hardware (110) to cause the one or more video clips to be adjusted in duration by adding and/or subtracting one or more image frames from the one or more video clips.
13. A method as claimed in claim 12, characterized in that the method includes executing the one or more software applications (200) upon the computing hardware (110) to synthesize a new header or start frame (830) of a video clip when a beginning part of the video clip is subtracted during editing.
14. A method as claimed in any one of claims 8 to 13, characterized in that the method includes executing the one or more software applications (200) upon the computing hardware (110) to provide a selection of one or more video clips (510) for inclusion into the timeline (400) presented adjacent to the timeline (400) on the touch-screen graphical user interface (130), wherein the selection is based upon at least one of:
(a) temporally mutually substantially similar temporal capture time of the video clips;
(b) mutually similar subject matter content determined by analysis of the video clips or of corresponding metadata; and
(c) mutually similar geographic location at which the video clips were captured.
15. A software application (200) stored in machine-readable data storage media, characterized in that the software application (200) is executable upon computing hardware (110) for implementing a method as claimed in any one of claims 8 to 14.
16. A software application (200) as claimed in claim 15, characterized in that the software application (200) is downloadable as a software application from an external database (160) to a mobile telephone for implementing the method.
17. A media distribution system (1100) including one or more databases (1210) coupled via a communication network (1200) to users (1260), wherein the system (1100) provides for a subset of the users (1260) to upload video content (1250) for distribution via the system (1100) to other users (1260), characterized in that: (a) the system (1100) includes a reviewing arrangement (1270) for receiving the uploaded video content (1250) provided to the system (1100) and for generating corresponding recommendations (1280) which determine an extent to which the video content (1250) is disseminated through the system (1100) to other users (1260); and
(b) the system (1100) includes a reward arrangement for rewarding users who have uploaded the video content (1250) to the system (1100) as a function of the recommendations (1280) and an extent of distribution of the video content (1250).
18. A media distribution system (1100) as claimed in claim 17, characterized in that the system (1100) is operable to add identification metadata to containers and/or atoms of video content (1250) for enabling detection of potential copyright infringement and/or unauthorized video file distribution.
19. A media distribution system (1100) as claimed in claim 17, characterized in that the reviewing arrangement (1270) includes users (1260) who belong to at least one special interest group accommodated by the system (1100), and the system (1100) includes an arrangement for directing the uploaded video content (1250) to the at least one special interest group based on subject matter included in the uploaded video content (1250).
20. A media distribution system (1100) as claimed in claim 17, 18 or 19, characterized in that the system (1100) is operable to generate advertisement content for presenting to a given user (1260), wherein the advertisement content comprises an advertiser's video content combined with video content provided by the given user (1260), or by at least one special interest group to which the given user (1260) belongs, wherein the advertiser's video content includes at least one video template into which the video content provided by the given user (1260), or by at least one special interest group to which the given user (1260) belongs, is inserted, thereby personalizing the advertisement content to the given user (1260).
21. A media distribution system (1100) as claimed in any one of claims 17 to 20, characterized in that the system (1100) includes an arrangement for monitoring a dissemination of the video content (1250) to users (1260) and aggregating distribution results for generating distribution analyses.
22. A media distribution system (1100) as claimed in claim 21, characterized in that the arrangement for monitoring the dissemination is operable to monitor dissemination of composite music clips and video clips and to provide analysis data indicative of association of the music clips with video clips selected by users (1260) of the system (1100).
23. A method of operating a media distribution system (1100) including one or more databases (1210) coupled via a communication network (1200) to users (1260), wherein the system (1100) provides for a subset of the users (1260) to upload video content (1250) for distribution via the system (1100) to other users (1260), characterized in that the method includes:
(a) using a reviewing arrangement (1270) of the system (1100) for receiving the uploaded video content (1250) provided to the system (1100) and for generating corresponding recommendations (1280) which determine an extent to which the video content (1250) is disseminated through the system (1100) to other users (1260); and
(b) using a reward arrangement of the system (1100) for rewarding users who have uploaded the video content (1250) to the system (1100) as a function of the recommendations (1280) and an extent of distribution of the video content (1250).
24. A method as claimed in claim 23, characterized in that the method includes arranging for the reviewing arrangement (1270) to include users (1260) who belong to at least one special interest group accommodated by the system (1100), and arranging for the system (1100) to include an arrangement for directing the uploaded video content (1250) to the at least one special interest group based on subject matter included in the uploaded video content (1250).
25. A method as claimed in claim 23 or 24, characterized in that the method includes arranging for the system (1100) to generate advertisement content for presenting to a given user (1260), wherein the advertisement content comprises an advertiser's video content combined with video content provided by the given user (1260), or by at least one special interest group to which the given user (1260) belongs, wherein the advertiser's video content includes at least one video template into which the video content provided by the given user (1260), or by at least one special interest group to which the given user (1260) belongs, is inserted, thereby personalizing the advertisement content to the given user (1260).
26. A method as claimed in any one of claims 23 to 25, characterized in that the method includes arranging for the system (1100) to include an arrangement for monitoring a dissemination of the video content (1250) to users (1260) and aggregating distribution results for generating distribution analyses.
27. A method as claimed in claim 26, characterized in that the arrangement for monitoring the dissemination is operable to monitor dissemination of composite music clips and video clips and to provide analysis data indicative of association of the music clips with video clips selected by users (1260) of the system (1100).
28. A software product recorded on machine-readable data storage media, characterized in that the software product is executable upon computing hardware (1120, 1210) for executing a method as claimed in any one of claims 23 to 27.
29. A camera apparatus including a wireless communication device (2010) incorporating computing hardware (2020) coupled to a data memory (2030), to a wireless communication interface (2050) for communicating data from and to the wireless communication device (2010), to a graphical user interface (2040) for receiving user input, and to an optical imaging sensor (2060) for receiving captured image data therefrom, wherein the computing hardware (2020) is operable to execute one or more software applications (2200) for enabling the optical imaging sensor (2060) to capture one or more images, and for storing corresponding image data in the data memory (2030) and/or for communicating the corresponding image data from the wireless communication device (2010) via its wireless communication interface (2050), wherein the wireless communication device (2010) has an elongate external enclosure having a longest dimension (L) defining a direction of a corresponding elongate axis (2110) for the wireless communication device (2010), characterized in that
(a) the one or more software applications (2200) are operable to enable the wireless communication device (2010) to capture images when the wireless communication device (2010) is operated by its user such that the elongate axis (2110) is orientated in substantially an upward direction, wherein the one or more software applications (2200) are operable to cause the computing hardware (2020) to select sub-portions of captured images provided from the optical imaging sensor (2060) and to generate corresponding rotated versions of the selected sub-portions to generate image data for storing in the data memory (2030) and/or for communicating via the wireless communication interface (2050); and
(b) the one or more software applications (2200) are operable to enable the wireless communication device (2010) to capture the one or more images as one or more video clips (2255) in response to the user providing tactile input at an active region of the graphical user interface (2040), wherein each video clip (2255) is of short duration (D) and is a self-contained temporal sequence of images.
30. A camera apparatus as claimed in claim 29, characterized in that the short duration (D) is in a range of 1 second to 20 seconds, more preferably in a range of 1 second to 10 seconds, and most preferably substantially 3 seconds.
31. A camera apparatus as claimed in claim 29 or 30, characterized in that the wireless communication device (2010) includes a sensor arrangement (2245) for sensing an angular orientation of the elongate axis (2110) of the wireless communication device (2010) and for generating a corresponding angle indicative signal (2250), and the one or more software applications (2200) are operable to cause the computing hardware (2020) to receive the angle indicative signal (2250) and to rotate the sub-portions (2230) of the captured images (2220) so that they appear when viewed to be upright and stable images (2240).
32. A camera apparatus as claimed in any one of claims 29 to 31, characterized in that the one or more software applications (2200) are operable when executed upon the computing hardware (2020) to present one or more icons (2310) representative of video clips upon the graphical user interface (2040), and one or more icons (2360, 2370, 2380) representative of sorting bins into which the one or more icons (2310) representative of video clips are susceptible to being sorted, wherein sorting of the one or more icons (2310) representative of video clips into the one or more icons (2360, 2370, 2380) representative of sorting bins is invoked by a user swiping motion executed by a thumb or finger of the user on the user graphical interface (2040), wherein a given icon (2310) representative of a corresponding video clip is defined at a beginning of the swiping motion and a destination sorting bin (2360, 2370, 2380) for the selected icon (2310) representative of a corresponding video clip is defined at an end of the swiping motion.
33. A camera apparatus as claimed in claim 32, characterized in that the one or more software applications (2200) executing upon the computing hardware (2020) are operable to cause the one or more icons (2310) representative of video clips upon the graphical user interface (2040) to be sorted to be presented in a scrollable array along a longest length dimension (2320) of the graphical user interface (2040).
34. A camera apparatus as claimed in claim 33, characterized in that the one or more software applications (2200) executing upon the computing hardware (2020) are operable to cause the one or more icons (2310) representative of video clips upon the graphical user interface (2040) to be sorted to be presented in a spatial arrangement indicative of a time at which the video clips were captured by the optical imaging sensor (2060).
35. A camera apparatus as claimed in claim 32, characterized in that at least one of the one or more icons (2360, 2370, 2380) representative of sorting bins, into which the one or more icons (2310) representative of video clips are susceptible to being sorted, is a trash bin (2360), wherein the computing hardware (2020) is operable to present the user with a graphical representation option for emptying the trash bin (2360) to cause data stored in the data memory (2030) corresponding to contents of the trash bin (2360) to be deleted for freeing data memory capacity of the data memory (2030).
36. A camera apparatus as claimed in any one of claims 29 to 35, characterized in that the one or more software applications (2200) are operable when executed upon the computing hardware (2020) to enable the wireless communication device (2010) to upload one or more video clips from the data memory (2030) to one or more remote proxy servers (2500) and to manipulate the one or more video clips uploaded to the one or more proxy servers (2500) via user instructions entered via the user graphical interface (2040).
37. A method of implementing a camera apparatus using a wireless communication device (2010) incorporating computing hardware (2020) coupled to a data memory (2030), to a wireless communication interface (2050) for communicating data from and to the wireless communication device (2010), to a graphical user interface (2040) for receiving user input, and to an optical imaging sensor (2060) for receiving captured image data therefrom, wherein the computing hardware (2020) is operable to execute one or more software applications (2200) for enabling the optical imaging sensor (2060) to capture one or more images, and for storing corresponding image data in the data memory (2030) and/or for communicating the corresponding image data from the wireless communication device (2010) via its wireless communication interface (2050), wherein the wireless communication device (2010) has an elongate external enclosure having a longest dimension (L) defining a direction of a corresponding elongate axis (2110) for the wireless communication device (2010), characterized in that said method includes:
(a) employing the one or more software applications (2200) to enable the wireless communication device (2010) to capture images when the wireless communication device (2010) is operated by its user such that the elongate axis (2110) is orientated in substantially an upward direction, wherein the one or more software applications (2200) are employed to cause the computing hardware (2020) to select sub-portions (2230) of captured images (2220) provided from the optical imaging sensor (2060) and to generate corresponding rotated versions (2240) of the selected sub-portions to generate image data for storing in the data memory (2030) and/or for communicating via the wireless communication interface (2050); and
(b) employing the one or more software applications (2200) to enable the wireless communication device (2010) to capture the one or more images as one or more video clips (2255) in response to the user providing tactile input at an active region of the graphical user interface (2040), wherein each video clip (2255) is of short duration (D) and is a self-contained temporal sequence of images.
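Purely as an illustration, and not part of the claimed subject matter, step (b) of claim 37 might be sketched as follows in Python. The function name, the frame-callback interface, and the default 3-second duration (taken from the "substantially 3 seconds" preference of claim 38) are assumptions for the sketch only:

```python
CLIP_DURATION_S = 3.0  # illustrative default: "substantially 3 seconds" (claim 38)

def capture_clip(read_frame, duration_s=CLIP_DURATION_S, fps=30):
    """On tactile input at the active region, capture a self-contained
    temporal sequence of frames of short duration D.

    read_frame is a hypothetical stand-in for the optical imaging
    sensor callback; it returns one frame per invocation.
    """
    n_frames = int(duration_s * fps)
    return [read_frame(i) for i in range(n_frames)]
```

The clip is returned as a plain list of frames, reflecting the claim's notion of a video clip as a self-contained temporal sequence of images rather than a reference into a longer recording.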
38. A method as claimed in claim 37, characterized in that the short duration (D) is in a range of 1 second to 20 seconds, more preferably in a range of 1 second to 10 seconds, and most preferably substantially 3 seconds.
39. A method as claimed in claim 37 or 38, characterized in that the method includes using a sensor arrangement (2245) of the wireless communication device (2010) for sensing an angular orientation of the elongate axis (2110) of the wireless communication device (2010) and generating a corresponding angle indicative signal (2250), and employing the one or more software applications (2200) to cause the computing hardware (2020) to receive the angle indicative signal (2250) and to rotate the sub-portions (2230) of the captured images so that they appear when viewed to be upright and stable images (2240).
40. A method as claimed in any one of claims 37 to 39, characterized in that the method includes employing the one or more software applications (2200) when executed upon the computing hardware (2020) to present one or more icons (2310) representative of video clips upon the graphical user interface (2040), and one or more icons (2360, 2370, 2380) representative of sorting bins into which the one or more icons (2310) representative of video clips are susceptible to being sorted, wherein sorting of the one or more icons (2310) representative of video clips into the one or more icons (2360, 2370, 2380) representative of sorting bins is invoked by a user swiping motion executed by a thumb or finger of the user on the graphical user interface (2040), wherein a given icon (2310) representative of a corresponding video clip is defined at a beginning of the swiping motion and a destination sorting bin (2360, 2370, 2380) for the selected icon representative of a corresponding video clip is defined at an end of the swiping motion.
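The swipe-to-sort gesture of claim 40 might, purely illustratively and outside the claimed subject matter, be resolved as below. The rectangular hit-testing scheme and all identifiers are assumptions of the sketch:

```python
def hit_test(regions, point):
    """Return the id of the first rectangular screen region
    (x0, y0, x1, y1) containing the touch point, or None."""
    x, y = point
    for region_id, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return region_id
    return None

def handle_swipe(clip_icons, sorting_bins, start, end):
    """Per claim 40: the clip icon under the swipe's start point is the
    selection, and the sorting bin under its end point is the
    destination. Returns (clip_id, bin_id), either of which may be
    None if the touch point missed every region."""
    return hit_test(clip_icons, start), hit_test(sorting_bins, end)
```

A swipe that ends outside every bin region simply yields no destination, leaving the clip icon unsorted.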
41. A method as claimed in claim 40, characterized in that the method includes employing the one or more software applications (2200) executing upon the computing hardware (2020) to cause the one or more icons (2310) representative of video clips that are to be sorted to be presented upon the graphical user interface (2040) in a scrollable array along a longest length dimension (2320) of the graphical user interface (2040).
42. A method as claimed in claim 41, characterized in that the method includes employing the one or more software applications (2200) executing upon the computing hardware (2020) to cause the one or more icons (2310) representative of video clips that are to be sorted to be presented upon the graphical user interface (2040) in a spatial arrangement indicative of a time at which the video clips were captured by the optical imaging sensor (2060).
43. A method as claimed in claim 40, characterized in that the method includes employing the one or more software applications (2200) to cause at least one of the one or more icons (2360, 2370, 2380) representative of sorting bins, into which the one or more icons (2310) representative of video clips are susceptible to being sorted, to be a trash bin (2360), wherein the computing hardware (2020) is operable to present the user with a graphical representation option for emptying the trash bin to cause data stored in the data memory (2030) corresponding to contents of the trash bin to be deleted for freeing data memory capacity of the data memory (2030).
44. A method as claimed in any one of claims 37 to 43, characterized in that the method includes employing the one or more software applications (2200) when executed upon the computing hardware (2020) to enable the wireless communication device (2010) to upload one or more video clips from the data memory (2030) to one or more remote proxy servers (2500) and to manipulate the one or more video clips uploaded to the one or more proxy servers (2500) via user instructions entered via the graphical user interface (2040).
45. A software product (2200) recorded on machine-readable data storage media, characterized in that the software product is executable upon computing hardware (2020) for implementing a method as claimed in any one of claims 37 to 44.
46. A software product as claimed in claim 45, characterized in that the software product (2200) is downloadable from an App store (2090) to a wireless communication device (2010) including the computing hardware (2020).
PCT/EP2013/002917 2012-09-28 2013-09-28 System for video clips WO2014048576A2 (en)

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
GB1217339.9A GB2506398B (en) 2012-09-28 2012-09-28 Camera apparatus
GB1217355.5A GB2506399A (en) 2012-09-28 2012-09-28 Video clip editing system using mobile phone with touch screen
GBGB1217355.5 2012-09-28
GB201217407A GB2506416A (en) 2012-09-28 2012-09-28 Media distribution system
GBGB1217407.4 2012-09-28
GBGB1217339.9 2012-09-28
US13/705,088 2012-12-04
US13/705,053 2012-12-04
US13/705,053 US20140096002A1 (en) 2012-09-28 2012-12-04 Video clip editing system
US13/705,070 US20140095291A1 (en) 2012-09-28 2012-12-04 Media distribution system
US13/705,070 2012-12-04
US13/705,088 US8948812B2 (en) 2012-09-28 2012-12-04 Camera apparatus

Publications (2)

Publication Number Publication Date
WO2014048576A2 true WO2014048576A2 (en) 2014-04-03
WO2014048576A3 WO2014048576A3 (en) 2014-10-09

Family

ID=50389069

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2013/002917 WO2014048576A2 (en) 2012-09-28 2013-09-28 System for video clips

Country Status (1)

Country Link
WO (1) WO2014048576A2 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080155420A1 (en) * 2006-12-22 2008-06-26 Apple Inc. Anchor point in media
WO2012094124A1 (en) * 2011-01-04 2012-07-12 Thomson Licensing Sequencing content

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GIRGENSOHN A ET AL: "A SEMI-AUTOMATIC APPROACH TO HOME VIDEO EDITING", PROCEEDINGS OF THE 2000 ACM SIGCPR CONFERENCE. CHICAGO. IL, APRIL 6 - 8, 2000; [ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY], NEW YORK, NY : ACM, US, 5 November 2000 (2000-11-05), pages 81-89, XP001171595, DOI: 10.1145/354401.354415 ISBN: 978-1-58113-212-0 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017008041A1 (en) * 2015-07-08 2017-01-12 Roofshoot, Inc. Gamified video listing application with scaffolded video production
WO2022194038A1 (en) * 2021-03-16 2022-09-22 北京字跳网络技术有限公司 Music extension method and apparatus, electronic device, and storage medium
CN113411619A (en) * 2021-04-30 2021-09-17 成都东方盛行电子有限责任公司 Non-programming streaming media film examination method in field of broadcast television
CN113411619B (en) * 2021-04-30 2023-06-20 成都东方盛行电子有限责任公司 Non-editing engineering streaming media film-examination method in broadcast television field

Also Published As

Publication number Publication date
WO2014048576A3 (en) 2014-10-09

Similar Documents

Publication Publication Date Title
US10123068B1 (en) System, method, and program product for generating graphical video clip representations associated with video clips correlated to electronic audio files
US9940970B2 (en) Video remixing system
US9021357B2 (en) System and method for enabling collaborative media stream editing
US8774598B2 (en) Method, apparatus and system for generating media content
JP5711355B2 (en) Media fingerprint for social networks
US7203702B2 (en) Information sequence extraction and building apparatus e.g. for producing personalised music title sequences
US20180098022A1 (en) Storage and editing of video of activities using sensor and tag data of participants and spectators
US20140096002A1 (en) Video clip editing system
US20100253764A1 (en) Method and system for customising live media content
JP2004357272A (en) Network-extensible and reconstruction-enabled media device
WO2003088665A1 (en) Meta data edition device, meta data reproduction device, meta data distribution device, meta data search device, meta data reproduction condition setting device, and meta data distribution method
US8619150B2 (en) Ranking key video frames using camera fixation
KR101117915B1 (en) Method and system for playing a same motion picture among heterogeneity terminal
JP2009004999A (en) Video data management device
WO2014048576A2 (en) System for video clips
JP5037483B2 (en) Content playback apparatus, content playback method, content playback processing program, and computer-readable recording medium
JP5112901B2 (en) Image reproducing apparatus, image reproducing method, image reproducing server, and image reproducing system
WO2014103374A1 (en) Information management device, server and control method
WO2009044351A1 (en) Generation of image data summarizing a sequence of video frames
KR102261221B1 (en) System And Method For Obtaining Image Information, And Method For Displaying Image Acquisition Information
JP5544030B2 (en) Clip composition system, method and recording medium for moving picture scene
Cremer et al. Machine-assisted editing of user-generated content
JP6038256B2 (en) Image search system and image search method
KR20100018987A (en) Method for contraction information of multimedia digital contents
JP5851375B2 (en) Image search system and image search method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13786414

Country of ref document: EP

Kind code of ref document: A2

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 13/08/2015)

122 Ep: pct application non-entry in european phase

Ref document number: 13786414

Country of ref document: EP

Kind code of ref document: A2