US20200175762A1 - Method for making a content sensitive video - Google Patents

Method for making a content sensitive video

Info

Publication number
US20200175762A1
Authority
US
United States
Prior art keywords
flight path
flight
virtual camera
attributes
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/533,384
Inventor
Roberto Mariani
Richard Claude Georges Leon Roussel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SKYDOOR Pte Ltd
Original Assignee
SKYDOOR Pte Ltd
Application filed by SKYDOOR Pte Ltd
Priority to US16/533,384
Publication of US20200175762A1
Priority to US17/109,649 (published as US20210327141A1)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/003 Navigation within 3D models or images
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel

Definitions

  • Magnitude is a number between 0 and 1 that gauges how closely the extracted attribute of the image or video matches the predefined attribute.
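As an illustration of how such a magnitude could be computed, the sketch below scores the match between an extracted face size and a predefined attribute's face size. The mean-relative-error formula and the function name are assumptions; the patent does not specify the measure.

```python
def magnitude(extracted_size, predefined_size):
    """Gauge, on a 0-to-1 scale, how closely an extracted face size
    matches a predefined attribute's face size (illustrative only).
    Sizes are (width, height) tuples in pixels."""
    ew, eh = extracted_size
    pw, ph = predefined_size
    # Mean relative error per dimension, clamped so the score stays in [0, 1].
    err = (abs(ew - pw) / pw + abs(eh - ph) / ph) / 2.0
    return max(0.0, 1.0 - err)
```

A 30 × 50 pixel face matches Predefined Attribute A ("Size of face is 30 × 50 pixels") with magnitude 1.0, while a poorer match yields a score between 0 and 1.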
  • the predefined attributes, magnitudes and corresponding recommended deformations are trained data that are stored in a database which the software program can access.
  • the magnitude of the predefined attributes for the segment of the flight path between the preceding discrete point 306 and the subsequent discrete point 307 are determined.
  • the recommended deformations for each predefined attribute are obtained by referencing Table 1. This is shown in Table 3 below.
  • the recommended deformations are then normalized (averaged), i.e. (1/Z) Σ (recommended deformation), to obtain a plurality of normalized deformations for the segment of the flight path between the preceding discrete point 306 and the subsequent discrete point 307 .
  • Each normalized deformation for the segment is assigned a local aesthetic score (between 0 and 1). The process is then repeated for each relevant discrete point.
  • the shortlisted flight path is optimized. This optimization is done by selecting a normalized deformation with the best overall score.
  • the overall score for each normalized deformation is calculated by weighting the local aesthetic score of a normalized deformation with a transition score.
  • a transition score (between 0 and 1) is the score given to the suitability of two consecutive normalized deformations.
  • the concept is that the overall deformation of the flight path must be considered as opposed to just considering the local (segment) deformations in isolation.
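The normalization and weighting steps above can be sketched as follows. The multiplicative form of the weighting and all names are assumptions: the patent only states that the local aesthetic score of a normalized deformation is weighted by a transition score.

```python
def normalize_deformations(recommended):
    """Average Z recommended deformations (each a list of per-point
    offsets) into one normalized deformation: (1/Z) * sum."""
    z = len(recommended)
    return [sum(col) / z for col in zip(*recommended)]

def overall_score(local_aesthetic_score, transition_score):
    """Weight a segment's local aesthetic score (0..1) by the
    transition score (0..1) to a neighbouring segment's deformation,
    so the overall deformation of the flight path is considered
    rather than each segment in isolation."""
    return local_aesthetic_score * transition_score
```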
  • FIG. 4 shows a 3D virtual world 400 , a shortlisted flight path 401 , and screens 402 , 403 and 410 . In this case, relevant discrete points 405 and 408 are selected for deformation.
  • the segment of the flight path between discrete points 404 and 406 will be analyzed based on the multimedia content displayed on screen 402
  • the segment between discrete points 407 and 409 will be analyzed based on the multimedia content displayed on screen 403
  • the segment of the flight path between discrete points 411 and 413 will be analyzed based on the multimedia content displayed on screen 410 .
  • an optimal aesthetic function is used to calculate the aesthetic value of the optimized shortlisted flight paths, and the optimized shortlisted flight path with the highest aesthetic value is chosen as the optimal flight path.
  • the optimal aesthetic function uses a dynamic programming technique (Bellman) in which a weighted Directed Acyclic Graph (DAG) is constructed.
  • the vertices are the local deformation hypotheses {X11, X12, . . . , X1n, X21, . . . , X2n, . . . , Xkn} associated respectively with the segments to be deformed {X1, X2, . . . , Xk}, and the edges linking two consecutive hypotheses (Xin, Xjn) are weighted by the likelihood of the transition.
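A minimal sketch of that dynamic program, assuming an additive objective over the DAG (a Viterbi-style pass; the exact objective and all names are assumptions, as the patent does not spell them out):

```python
def best_deformation_sequence(local_scores, transition_scores):
    """Bellman-style dynamic program over the weighted DAG: pick one
    deformation hypothesis per segment so that the sum of local
    aesthetic scores and transition likelihoods is maximal.

    local_scores[k][i] is the score of hypothesis i for segment k;
    transition_scores[k][i][j] is the likelihood of the transition
    from hypothesis i of segment k to hypothesis j of segment k + 1.
    Returns (best total score, chosen hypothesis index per segment)."""
    best = list(local_scores[0])   # best cumulative score per hypothesis
    back = []                      # backpointers, one list per later segment
    for seg in range(1, len(local_scores)):
        new_best, pointers = [], []
        for j, local in enumerate(local_scores[seg]):
            candidates = [best[i] + transition_scores[seg - 1][i][j]
                          for i in range(len(best))]
            i_best = max(range(len(candidates)), key=candidates.__getitem__)
            new_best.append(candidates[i_best] + local)
            pointers.append(i_best)
        back.append(pointers)
        best = new_best
    # Trace the best path backwards through the backpointers.
    j = max(range(len(best)), key=best.__getitem__)
    total = best[j]
    path = [j]
    for pointers in reversed(back):
        j = pointers[j]
        path.append(j)
    path.reverse()
    return total, path
```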
  • in step 210, the virtual camera travels along the optimal flight path to record the 2D video.
  • one advantage conferred by the described invention is that it provides a relatively simple means of producing unique videos. That is, two users (for example, an end-user, an advertiser, or an event organizer) with two different sets of multimedia content using the same 3D virtual world will obtain two different videos.
  • an online provisioning of the described invention would allow near immediate production and rendering of a video construction for customers such as MTV or advertisers at a very low cost.
  • a functional production engine takes multimedia content and a 3D virtual world and automatically performs customized rendering of the flight path.
  • the production technology could be provided in an online-only version, where finished productions are offered for sale; a standalone version that is sold to end users; or a combined standalone and online version that can be expanded through information collection, generating metrics and statistics associated with, for example, tool use, which would provide recommendations and allow for an optimized tool offering and other enhancements.
  • Metrics, statistics and other feedback generated from, for example, pre-completed productions with standard path assignments, in combination with user feedback, could also be used to improve product quality, or could be combined with other information, such as demographic information, to generate marketing information.
  • the inventive exemplary method and apparatus described herein can be used for 3D PowerPoint presentations (companies), 3D consumer electronics ads (Blackberry video), 3D event promotions (MDA, TechVenture 2010), music promotions (Viacom, MTV, Sony, EMI), book promotions (Amazon.com), concert ticketing (Guns and Roses video) and sports events (World Cup video). Still further, the invention can also be used for daily changes of advertisements and for user-generated advertisements. Globally, implementations in accordance with embodiments can accelerate the production of high-impact videos for event promotion, increased online visibility and immediate, impactful, viral marketing.
  • the methods described herein can enable users to produce their own customized videos, with their own customized 3D paths and customized moods, computed from the analysis of the multimedia content that they upload in the 3D world.
  • Embodiments described herein can also be used in technology for e-Advertisers, web, e-cards, YouTube, Picasa, PowerPoint, or the like.
  • Another aspect of the invention is how new flight paths are created to build up the pool of flight paths in the database.
  • One way is to concatenate existing flight paths together to create new flight paths.
  • Another way is to extract sub-paths from existing flight paths and then concatenate the sub-paths to an existing path to create new flight paths (referred to as splicing).
  • the optimized flight paths can also become new flight paths in the database.
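The concatenation and splicing operations above can be sketched as follows, with flight paths represented as lists of discrete 3D points (a representation assumed here for illustration):

```python
def concatenate(path_a, path_b):
    """Create a new flight path by joining path_b onto the end of
    path_a. Paths are lists of discrete (x, y, z) points."""
    return path_a + path_b

def splice(path_a, start, end, path_b):
    """Extract the sub-path path_a[start:end] from the first flight
    path and concatenate it onto the second to create a new one."""
    return path_b + path_a[start:end]
```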

Abstract

A method is described for recording a video. The method comprises receiving one or more user content and then providing a 3D virtual world and a virtual camera having one or more parameters. The optimal 3D flight path of the virtual camera is then determined based on the user content and the virtual camera is then allowed to travel along the optimal 3D flight path and to record the video.

Description

  • Therefore, the object of the invention is to provide a solution that overcomes the above disadvantages or at least provides a novel method for making a video.
  • SUMMARY OF INVENTION
  • According to an embodiment, a method is described for recording a video. The method comprises receiving one or more user content and providing a 3D virtual world and a virtual camera having one or more parameters. The optimal 3D flight path of the virtual camera is then determined based on the one or more user content. The virtual camera is then allowed to travel along the optimal 3D flight path and to record the video.
  • In another embodiment, the method further comprises the step of modifying the one or more parameters of the virtual camera based on the one or more user content.
  • In another embodiment, the step of determining the optimal 3D flight path comprises providing a plurality of 3D flight paths and extracting a plurality of attributes from the one or more user content. Each of the plurality of 3D flight paths is then optimized based on the plurality of attributes. An aesthetic function is then used to calculate an aesthetic quality of each of the optimized 3D flight paths. The optimized 3D flight path with the highest aesthetic quality is then selected as the optimal 3D flight path.
  • In another embodiment, the one or more user content comprises one or more multimedia content and the step of optimizing each of the plurality of 3D flight paths comprises selecting one or more segments on the 3D flight path and optimizing the one or more segments based on the plurality of attributes. The 3D flight path is then deformed based on the one or more optimized segments.
  • In another embodiment, the step of optimizing the one or more segments comprises deriving a plurality of recommended deformations for the one or more segments based on the plurality of attributes. The recommended deformation is then selected using dynamic programming and the one or more segments are then deformed based on the selected recommended deformation.
  • In another embodiment, the one or more user content comprises an audio track.
  • In another embodiment, a method is described for creating a new flight path. The method comprises providing a first flight path and a second flight path and concatenating the first flight path to the second flight path to create a new flight path.
  • In another embodiment, a method is described for creating a new flight path. The method comprises providing a first flight path and a second flight path, extracting a subpath from the first flight path and concatenating the subpath to the second flight path to create a new flight path.
  • In another embodiment, a computer program is described for instructing a computer to perform any of the methods as described herein.
  • In another embodiment, a computer readable medium is described having the computer program as described herein.
  • The invention will now be described in detail with reference to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order that embodiments of the invention may be fully and more clearly understood by way of non-limitative examples, the following description is taken in conjunction with the accompanying drawings in which like reference numerals designate similar or corresponding elements, regions and portions, and in which:
  • FIG. 1 is a diagram illustrating a virtual camera on a flight path in a 3D virtual world;
  • FIG. 2 is a flow chart illustrating a method to record a video;
  • FIG. 3 is a diagram illustrating a relevant discrete point, the preceding discrete point and the subsequent discrete point;
  • FIG. 4 is a diagram illustrating two relevant discrete points, and their preceding discrete points and subsequent discrete points.
  • DETAILED DESCRIPTION
  • Referring to the drawings, FIG. 1 shows a flight path 101 of a virtual camera 102 in a 3D virtual world. A particular 3D virtual world can have a plurality of flight paths 101 tailored or associated to it. Virtual camera 102 has a field of view 105. This field of view 105 is what the virtual camera 102 “sees” and is what the virtual camera 102 records at any one time. In the 3D virtual world, there are 3D objects, and the surface on a 3D object where multimedia content 104 (which includes images and videos) is displayed is called a screen 103.
  • When moving along the flight path 101, one or more screens 103 may enter the virtual camera's 102 field of view 105. The screens 103 that come into the virtual camera's 102 field of view 105 in a flight path 101 is henceforth referred to as the screens 103 along the flight path 101. As the virtual camera 102 travels along flight path 101 and when the screens 103 come into the virtual camera's 102 field of view 105, the virtual camera 102 will record the multimedia content 104 which are displayed on the screens 103. Virtual camera positions 106, 107 and 108 are the positions along the flight path 101 where the virtual camera 102 will be momentarily stationary such that it can record the multimedia content 104 on the screens 103.
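A simplified visibility test for whether a screen enters the virtual camera's field of view might look like the following. The cone-angle test and all names are assumptions, and occlusion by other 3D objects is ignored:

```python
import math

def in_field_of_view(cam_pos, cam_dir, screen_center, fov_degrees):
    """Return True if the screen centre lies within the virtual
    camera's field-of-view cone. cam_pos and screen_center are
    (x, y, z) points; cam_dir is the camera's viewing direction."""
    to_screen = [s - c for s, c in zip(screen_center, cam_pos)]
    dot = sum(a * b for a, b in zip(cam_dir, to_screen))
    norm = math.hypot(*cam_dir) * math.hypot(*to_screen)
    if norm == 0.0:
        return False  # degenerate: camera sits on the screen centre
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= fov_degrees / 2.0
```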
  • In accordance with a preferred embodiment of the invention, FIG. 2 shows a method for producing a 2D video. In Step 201, a software program is provided such that a user can use the software program to upload multimedia content (images and videos) and audio tracks. Alternatively, the software program also provides a database of multimedia content and audio tracks for the user to choose from.
  • In step 202, the software program extracts the attributes of the uploaded or selected multimedia content (images or videos) and audio tracks. For images, the software program runs image processing techniques to extract attributes of the images. Examples of image processing techniques include face detection, character recognition, geometric shape detection and natural texture labeling. Attributes of an image can be its content, for example, the image has “2 faces that are 30×40 pixels wide” and “3 lines of text at 12 pixels per inch”. For videos, the software program uses video analysis tools to break the video down into a series of frames (akin to images) and extract attributes of these frames. Other attributes of the video, such as its length and frame rate, are also extracted.
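The attribute extraction in step 202 can be sketched as below. The detectors are injected as callables standing in for real face-detection and character-recognition components (in practice a computer-vision library would supply them); the function name and the attribute phrasing are illustrative assumptions.

```python
def extract_image_attributes(image, detect_faces, detect_text_lines):
    """Build content attributes for an image, e.g. '2 faces that are
    30x40 pixels' and '3 lines of text at 12 pixels per inch'.
    detect_faces returns a list of (width, height) face sizes;
    detect_text_lines returns a list of pixels-per-inch values,
    one per detected text line. Both are stand-ins for real detectors."""
    attributes = []
    faces = detect_faces(image)
    if faces:
        w, h = faces[0]
        attributes.append(f"{len(faces)} faces that are {w}x{h} pixels")
    lines = detect_text_lines(image)
    if lines:
        attributes.append(f"{len(lines)} lines of text at {lines[0]} pixels per inch")
    return attributes
```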
  • The software program uses audio and pitch analysis tools to extract audio signal attributes like pitch and tempo. These audio signal attributes can be used by the software program to trigger special effects in the 3D virtual world. The software program can access an entire database of special effects that correspond to each audio signal attribute. For example, if the audio track is a Mozart piece with a slow tempo and low pitch, the software program will insert butterflies into the 3D virtual world. If the audio track is a rock track with a fast tempo and high pitch, the software program will insert fireworks into the 3D virtual world.
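The mapping from audio attributes to special effects can be sketched as a lookup into an effects database. The tempo and pitch thresholds below are illustrative assumptions, chosen only to reproduce the Mozart/butterflies and rock/fireworks examples above:

```python
def pick_special_effect(tempo_bpm, pitch_hz, effects_db=None):
    """Map extracted audio signal attributes (tempo, pitch) to a
    special effect to insert into the 3D virtual world. The default
    effects_db and the 120 BPM / 440 Hz thresholds are assumptions."""
    if effects_db is None:
        effects_db = {("slow", "low"): "butterflies",
                      ("fast", "high"): "fireworks"}
    tempo = "fast" if tempo_bpm >= 120 else "slow"
    pitch = "high" if pitch_hz >= 440 else "low"
    return effects_db.get((tempo, pitch), "no effect")
```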
  • In step 203, the software program provides a database of 3D virtual worlds for the user to select from. Examples of 3D virtual worlds can be a New York City virtual world and a football stadium virtual world. A search function can be provided to the user such that a user can enter his search criteria and the software program will return a filtered list of 3D virtual worlds. For example, a user can enter the word “stadium” into the search function, and the software program returns a basketball stadium 3D virtual world, a football stadium 3D virtual world etc. The 3D virtual worlds that are provided by the software program to the user to select from may also be dependent on the multimedia content uploaded/selected by the user. For example, if the multimedia content consists of 3 images, the software program may only shortlist to the user 3D virtual worlds which have 3 screens. Alternatively, the software program may also modify an existing 3D virtual world by adding or deleting screens to tally the number of screens with the number of multimedia content uploaded by the user, and shortlist this modified 3D virtual world to the user.
  • In step 204, the software program provides a plurality of virtual cameras for the user to select from. The appearance and parameters of the virtual cameras can mimic actual video camera models from Sony, Canon etc. so that users can choose to use virtual cameras that they are accustomed to. The virtual cameras can also have infrared or thermal options such that the filming of the 2D video appears to be in the infrared or thermal spectrum.
  • In step 205, the software program filters or shortlists a plurality of possible flight paths out of a database of flight paths. The shortlisted flight paths can include the flight paths tailored to the selected 3D virtual world. The shortlisted flight paths can also include flight paths that are not tailored to the selected 3D virtual world but tailored to other 3D virtual worlds. The shortlisted flight paths can also contain flight paths (out of the shortlisted flight paths) that have been modified by the software program based on certain criteria. For instance, if the number of screens in a shortlisted flight path exceeds the number of multimedia content uploaded by a user, the shortlisted flight path may be modified such that the virtual camera travels to only the screens that display the multimedia content.
  • In an instance where the number of screens in a shortlisted flight path is less than the number of multimedia content uploaded by a user, the shortlisted flight path may be modified such that the flight path reroutes back to screens it has already passed, so that different multimedia content can be displayed on the same screen across a period of time. To illustrate with an example, suppose the number of screens in a flight path is two (Screens 1 and 2) and the number of multimedia content items uploaded by a user is three (Images 1, 2 and 3). Image 1 and Image 3 will be displayed on Screen 1 at different points in time and Image 2 will be displayed on Screen 2. The flight path will therefore comprise the virtual camera travelling towards Screen 1 displaying Image 1, and thereafter to Screen 2 displaying Image 2. The flight path will then result in the virtual camera travelling back to Screen 1 displaying Image 3. If the multimedia contents are sequential PowerPoint slides, the display order of the multimedia content should be adhered to.
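One way to reproduce the rerouting behaviour described in this example is a round-robin assignment of content to screens; the function name and the (screen, content) stop representation are illustrative assumptions:

```python
def plan_screen_visits(num_screens, contents):
    """Assign contents to screens in display order, rerouting back to
    earlier screens when there are more contents than screens.
    Returns the ordered list of (screen number, content) camera stops."""
    return [(i % num_screens + 1, c) for i, c in enumerate(contents)]
```

With two screens and three images this yields the visit order from the example: Screen 1 shows Image 1, Screen 2 shows Image 2, then the camera returns to Screen 1 for Image 3.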
  • For each flight path, the virtual camera will have a set of default virtual camera parameter values (such as speed, pan and tilt) at different regions of the flight path. In step 206, the virtual camera parameters can be adapted based on the 3D virtual world chosen. The pan and tilt of the virtual camera should be changed such that the screens would come into the field of view of the virtual camera. The speed of the virtual camera should also be changed in light of the positions of the screens. For example, at the regions of the flight path where the screen is partially inside or inside the virtual camera's field of view, the speed of the virtual camera is slowed down and the virtual camera may take a stationary position at the front of the screen.
  • The time taken in the flight path should adhere to the duration of the selected audio track. Therefore, the speed of the virtual camera as it travels through the flight path can be modified such that the time taken in the flight path adheres to the duration of the selected audio track. The virtual camera parameters are also modified in light of the extracted attributes of the multimedia content. If the extracted attributes of the multimedia content indicate that it contains images of small text, the zoom of the virtual camera may be increased.
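The speed adjustment can be sketched as a uniform rescaling of the per-segment camera speeds; the uniformity of the rescaling and all names are assumptions, since the patent only requires the total flight time to match the audio duration:

```python
def scale_speeds(segment_lengths, default_speeds, audio_duration):
    """Rescale per-segment camera speeds so that the total flight
    time equals the selected audio track's duration.
    segment_lengths and default_speeds are parallel lists."""
    current = sum(l / s for l, s in zip(segment_lengths, default_speeds))
    factor = current / audio_duration  # >1 speeds up, <1 slows down
    return [s * factor for s in default_speeds]
```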
  • In step 207, the software program deforms the discrete points of the shortlisted flight paths based on the extracted attributes of the images and videos. The software program does this by first selecting discrete points on the shortlisted flight paths for deformation. FIG. 3 illustrates this: it shows a selected 3D virtual world 300, a shortlisted flight path 301, and multimedia content 302 shown on a screen 303. The software program selects the most relevant discrete points along the shortlisted flight path 301 for deformation, i.e. points on the shortlisted flight path 301 where the screen 303 is in the field of view of the virtual camera 304. This is because these discrete points will have the most impact on the filming, by the virtual camera 304, of the multimedia content 302 shown on the screens 303.
  • In this illustration, discrete point 305 is chosen for deformation. The segment of the flight path (which can be linear or non-linear) between the preceding discrete point 306 and the subsequent discrete point 307 is analyzed for deformation. The software program performs this analysis by first accessing a table of predefined attributes with magnitudes and their corresponding recommended deformations, as shown in Table 1 below.
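The selection of "relevant" discrete points — those where a screen falls inside the virtual camera's field of view — can be sketched in 2D as an angle test between the camera's viewing direction and the direction to the screen (illustrative only; the patent does not prescribe this formula):

```python
import math

def in_field_of_view(cam_pos, cam_dir, target, half_fov_deg=30.0):
    """True if `target` lies within the camera's horizontal field of
    view (a 2D sketch with an assumed 60-degree total FOV)."""
    to_target = (target[0] - cam_pos[0], target[1] - cam_pos[1])
    dot = cam_dir[0] * to_target[0] + cam_dir[1] * to_target[1]
    norm = math.hypot(*cam_dir) * math.hypot(*to_target)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= half_fov_deg

def relevant_points(path, directions, screen, half_fov_deg=30.0):
    """Indices of discrete points on the path where the screen is visible."""
    return [i for i, (p, d) in enumerate(zip(path, directions))
            if in_field_of_view(p, d, screen, half_fov_deg)]
```

Only the segments bracketing these indices (the preceding and subsequent discrete points) would then be considered for deformation.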
  • TABLE 1
    Table of predefined attributes with magnitudes and
    their corresponding recommended deformations

    Predefined Attribute   Magnitude   Recommended Deformation
    A                      1.0         (shown graphically; Figure P00001)
    A                      0.8         (shown graphically; Figure P00002)
    A                      0.0         (shown graphically; Figure P00003)
    B                      1.0         (shown graphically; Figure P00004)
    B                      0.8         (shown graphically; Figure P00005)
    .                      .           .
    .                      .           .
    Z
  • Table 2 below shows the predefined attributes.
  • TABLE 2
    Predefined Attributes

    Predefined Attribute A   Size of face is 30 × 50 pixels
    Predefined Attribute B   Size of face is 40 × 60 pixels
    Predefined Attribute C   Size of text is 12 pixels per inch
    .                        .
    .                        .
    Predefined Attribute Z   .
  • For each predefined attribute, and at a particular magnitude, there is a recommended deformation for the segment of the flight path between the preceding discrete point 306 and the subsequent discrete point 307. Magnitude is a number between 0 and 1 that gauges how closely the extracted attribute of the image or video matches the predefined attribute. The predefined attributes, magnitudes and corresponding recommended deformations are trained data stored in a database which the software program can access.
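Since the exact similarity measure is left open, the magnitude can be illustrated as, say, an area ratio between the extracted face size and a predefined one (a hypothetical measure; any function yielding a value in [0, 1] would do):

```python
def magnitude(extracted_size, predefined_size):
    """How closely an extracted face size matches a predefined one:
    the ratio of the smaller area to the larger, so 1.0 is a perfect
    match and the value decays towards 0 as the sizes diverge."""
    a = extracted_size[0] * extracted_size[1]
    b = predefined_size[0] * predefined_size[1]
    return min(a, b) / max(a, b)

# Extracted 30×50 face vs. Predefined Attribute A (30 × 50 pixels):
print(magnitude((30, 50), (30, 50)))   # → 1.0
```

A 30×50 face scored against Predefined Attribute B (40×60) gives 1500/2400 = 0.625, i.e. a weaker but still substantial match.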
  • Based on the attributes extracted from multimedia content 302, the magnitudes of the predefined attributes for the segment of the flight path between the preceding discrete point 306 and the subsequent discrete point 307 are determined. With these magnitudes, the recommended deformation for each predefined attribute is obtained by referencing Table 1. This is shown in Table 3 below.
  • TABLE 3
    Recommended deformations for each
    predefined attribute for a segment

    Predefined Attribute   Magnitude   Recommended Deformation
    A                      0.8         (shown graphically; Figure P00006)
    B                      0.8         (shown graphically; Figure P00007)
    C                      0.5         (shown graphically; Figure P00008)
    D                      0.1         (shown graphically; Figure P00009)
    .                      .           .
    Z
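The lookup from Table 1 — finding, for each attribute, the row whose magnitude is nearest the measured one — might look like this (the table contents below are hypothetical stand-ins, since the published recommended deformations are graphical):

```python
# Table 1 as a dict: attribute → {magnitude: recommended deformation}.
# The deformation labels are placeholders for the patent's figures.
TABLE_1 = {
    "A": {1.0: "arc_wide", 0.8: "arc_medium", 0.0: "straight"},
    "B": {1.0: "dolly_in", 0.8: "dolly_slow"},
}

def recommended_deformation(attribute, measured_magnitude):
    """Pick the table row whose magnitude is closest to the measured one."""
    rows = TABLE_1[attribute]
    nearest = min(rows, key=lambda m: abs(m - measured_magnitude))
    return rows[nearest]

# A measured magnitude of 0.75 for attribute A matches the 0.8 row.
print(recommended_deformation("A", 0.75))   # → "arc_medium"
```

Repeating this for every predefined attribute yields the per-segment rows of Table 3.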
  • The recommended deformations are then normalized (averaged), i.e. (1/Z) Σ recommended deformations, to obtain a plurality of normalized deformations for the segment of the flight path between the preceding discrete point 306 and the subsequent discrete point 307. Each normalized deformation for the segment is assigned a local aesthetic score (between 0 and 1). The process is then repeated for each relevant discrete point.
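The averaging (1/Z) Σ recommended deformation can be sketched by treating each recommended deformation as a vector of per-point offsets along the segment (the data shape is an assumption for illustration):

```python
def normalized_deformation(recommended):
    """Average Z recommended deformations (each a list of per-point
    offsets along the segment) into one normalized deformation:
    (1/Z) * sum of the recommended deformations, element-wise."""
    Z = len(recommended)
    return [sum(vals) / Z for vals in zip(*recommended)]

# Two recommended offset profiles over three discrete points:
avg = normalized_deformation([[0.0, 2.0, 0.0],
                              [0.0, 4.0, 2.0]])   # → [0.0, 3.0, 1.0]
```

Each such averaged profile is one candidate deformation for the segment, to be scored aesthetically.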
  • In step 208, the shortlisted flight path is optimized. This optimization is done by selecting a normalized deformation with the best overall score. The overall score for each normalized deformation is calculated by weighting the local aesthetic score of a normalized deformation with a transition score. A transition score (between 0 and 1) is the score given to the suitability of two consecutive normalized deformations. The concept is that the overall deformation of the flight path must be considered, as opposed to just considering the local (segment) deformations in isolation. FIG. 4 shows a 3D virtual world 400, a shortlisted flight path 401, and screens 402, 403 and 410. In this case, relevant discrete points 405, 408 and 412 are selected for deformation. Therefore, the segment of the flight path between discrete points 404 and 406 will be analyzed based on the multimedia content displayed on screen 402, the segment between discrete points 407 and 409 will be analyzed based on the multimedia content displayed on screen 403, and the segment of the flight path between discrete points 411 and 413 will be analyzed based on the multimedia content displayed on screen 410.
  • In step 209, an optimal aesthetic function is used to calculate the aesthetic value of the optimized shortlisted flight paths, and the optimized shortlisted flight path with the highest aesthetic value is chosen as the optimal flight path. The optimal aesthetic function uses a dynamic programming technique (Bellman) in which a weighted Directed Acyclic Graph (DAG) is constructed. In the DAG, the vertices are the local deformation hypotheses {X11, X12, . . . , X1n, X21, . . . , X2n, . . . , Xkn}, associated respectively with the segments to be deformed {X1, X2, . . . , Xk}, and the edges linking two consecutive hypotheses (Xin, Xjn) are weighted by the likelihood of the transition.
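A minimal sketch of the Bellman-style dynamic program over the DAG of deformation hypotheses: vertex weights are local aesthetic scores, edge weights are transition scores, and the highest-scoring path through the graph selects one hypothesis per segment (the additive scoring and tie-breaking here are assumptions; the patent only names the technique):

```python
def best_deformation_sequence(local_scores, transition):
    """local_scores[k][i] is the aesthetic score of hypothesis i for
    segment k; transition(k, i, j) scores following hypothesis i at
    segment k with hypothesis j at segment k+1.  Returns the winning
    hypothesis index per segment and the overall score."""
    best = list(local_scores[0])      # best cumulative score per current hypothesis
    back = []                         # back-pointers for traceback
    for k in range(1, len(local_scores)):
        new_best, ptr = [], []
        for j, lj in enumerate(local_scores[k]):
            score, arg = max((best[i] + transition(k - 1, i, j) + lj, i)
                             for i in range(len(local_scores[k - 1])))
            new_best.append(score)
            ptr.append(arg)
        back.append(ptr)
        best = new_best
    i = max(range(len(best)), key=best.__getitem__)   # best final hypothesis
    path = [i]
    for ptr in reversed(back):                        # trace back per segment
        i = ptr[i]
        path.append(i)
    return list(reversed(path)), max(best)
```

With two segments, local scores [[0.9, 0.1], [0.2, 0.8]] and a transition bonus of 0.5 for keeping the same hypothesis index, the program picks hypothesis 0 then 1: the strong local scores outweigh the transition bonus.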
  • In step 210, the virtual camera travels along the optimal flight path to record the 2D video.
  • One skilled in the art will appreciate that one advantage conferred by the described invention is that it provides a relatively simple means of producing unique videos: two users (for example, an end-user, an advertiser, or an event organizer) with two different sets of multimedia content using the same 3D virtual world will obtain two different videos.
  • One would also appreciate that an online provisioning of the described invention would allow near-immediate production and rendering of a video for customers such as MTV or advertisers at very low cost. A functional production engine takes multimedia content and a 3D virtual world and automatically performs customized rendering of the flight path. In alternative embodiments, the production technology could be provided as an online-only version in which finished productions are offered for sale, as a standalone version sold to end users, or as a combined standalone and online version that can be expanded through information collection to generate metrics and statistics associated with, for example, tool use, which would provide recommendations and allow for optimized tool offerings and other enhancements. Metrics, statistics and other feedback generated from, for example, pre-completed productions with standard path assignments, in combination with user feedback, could also be used to improve product quality, or could be combined with other information, such as demographic information, to generate marketing information.
  • Another advantage is that the optimization of the flight path is done automatically according to intelligent, content-sensitive parameters such that meaningful videos can be produced. In an embodiment, the exemplary method and apparatus described herein can be used for 3D PowerPoint presentations (companies), 3D consumer electronics ads (Blackberry video), 3D event promotions (MDA, TechVenture 2010), music promotions (Viacom, MTV, Sony, EMI), book promotions (Amazon.com), concert ticketing (Guns and Roses video) and sports events (World Cup video). Still further, the invention can also be used for daily changes of advertisements and for user-generated advertisements. Globally, implementations in accordance with embodiments can accelerate the production of high-impact videos for event promotion, increased online visibility and immediate, impactful, viral marketing. For example, in fast-paced countries such as Singapore, which is in the process of moving to 3D for much of its advertising, promotional and other content, the methods described herein can enable users to produce their own customized videos, with their own customized 3D paths and customized moods, computed from the analysis of the multimedia content that they upload into the 3D world. Embodiments described herein can also be used in technology for e-advertisers, the web, e-cards, YouTube, Picasa, PowerPoint, or the like.
  • Another aspect of the invention is how new flight paths are created to build up the pool of flight paths in the database. One way is to concatenate existing flight paths together to create new flight paths. Another way is to extract sub-paths from existing flight paths and then concatenate the sub-paths to an existing path to create new flight paths (referred to as splicing). The optimized flight paths can also become new flight paths in the database.
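Concatenation and splicing can be sketched with flight paths represented simply as ordered lists of discrete points (an illustrative representation; real paths would carry camera parameters as well):

```python
def concatenate(path_a, path_b):
    """New flight path: path_a followed by path_b, dropping the
    duplicated junction point if the paths meet end-to-start."""
    if path_a and path_b and path_a[-1] == path_b[0]:
        return path_a + path_b[1:]
    return path_a + path_b

def splice(source, start, end, target, at):
    """Extract the sub-path source[start:end] and insert it into
    `target` at index `at` — the splicing variant described above."""
    return target[:at] + source[start:end] + target[at:]

# Joining two paths that share a junction point:
joined = concatenate([(0, 0), (1, 1)], [(1, 1), (2, 2)])   # → [(0, 0), (1, 1), (2, 2)]
```

Both operations produce new discrete-point sequences that can be stored back into the flight-path database.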
  • There are instances where a 3D virtual world is modified. In such circumstances, the flight paths in the database that are tailored to or associated with the modified 3D virtual world must also be modified accordingly. For example, if a new virtual building is created in the 3D virtual world, the associated flight paths must be altered such that they do not “go through” the new virtual building but “go around” it.
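A crude illustration of this adaptation: any discrete point that falls inside the new building's 2D bounding box is pushed out to one side, so the path goes around rather than through it (the sideways-detour strategy is an assumption for illustration; a production system would re-plan the path smoothly):

```python
def reroute_around(path, box_min, box_max, detour_offset=5.0):
    """Push any (x, y) discrete point inside the building's axis-aligned
    bounding box out past its +x face, leaving other points unchanged."""
    out = []
    for x, y in path:
        if box_min[0] <= x <= box_max[0] and box_min[1] <= y <= box_max[1]:
            x = box_max[0] + detour_offset   # detour past the building
        out.append((x, y))
    return out

# The middle point sits inside a building spanning (4,4)–(6,6), so it is moved aside.
fixed = reroute_around([(0, 0), (5, 5), (10, 10)], (4, 4), (6, 6))
```

The adjusted path can then replace the original entry in the database.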
  • While exemplary embodiments pertaining to the invention have been described and illustrated, it will be understood by those skilled in the technology concerned that many variations or modifications involving particular design, implementation or construction are possible and may be made without deviating from the inventive concepts described herein.

Claims (10)

1) A method for recording a video comprising the steps of:—
receiving at least one user content;
providing a 3D virtual world;
providing a virtual camera having at least one parameter;
automatically determining the optimal 3D flight path of the virtual camera based on the at least one user content; and
allowing the virtual camera to travel along the optimal 3D flight path and to record the video.
2) The method of claim 1 further comprising the step of:—
modifying the at least one parameter of the virtual camera based on the at least one user content.
3) The method of claim 1 or 2 wherein the step of determining the optimal 3D flight path comprises the steps of:—
providing a plurality of 3D flight paths;
extracting a plurality of attributes from the at least one user content;
optimizing each of the plurality of 3D flight paths based on the plurality of attributes;
using an aesthetic function to calculate an aesthetic quality of each of the optimized plurality of 3D flight paths; and
selecting the optimized 3D flight path with the highest aesthetic quality as the optimal 3D flight path.
4) The method of claim 3 wherein the at least one user content comprises at least one multimedia content and the step of optimizing each of the plurality of 3D flight paths comprises the steps of:—
selecting at least one segment on the 3D flight path;
optimizing the at least one segment based on the plurality of attributes; and
deforming the 3D flight path based on the at least one optimized segment.
5) The method of claim 4 wherein the step of optimizing the at least one segment comprises the steps of:—
deriving a plurality of recommended deformations for the at least one segment based on the plurality of attributes;
selecting the recommended deformation using dynamic programming; and
deforming the at least one segment based on the selected recommended deformation.
6) The method of claim 3 wherein the at least one user content comprises an audio track.
7) A method for creating a new flight path comprising the steps of:—
providing a first flight path and a second flight path; and
concatenating the first flight path to the second flight path to create a new flight path.
8) A method for creating a new flight path comprising the steps of:—
providing a first flight path and a second flight path;
extracting a subpath from the first flight path; and
concatenating the subpath to the second flight path to create a new flight path.
9) A computer program for instructing a computer to perform the method of any one of claims 1 to 8.
10) A computer readable medium having the computer program of claim 9.
US16/533,384 2012-04-20 2019-08-06 Method for making a content sensitive video Abandoned US20200175762A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/533,384 US20200175762A1 (en) 2012-04-20 2019-08-06 Method for making a content sensitive video
US17/109,649 US20210327141A1 (en) 2012-04-20 2020-12-02 Method for making a content sensitive video

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/SG2012/000141 WO2013158034A1 (en) 2012-04-20 2012-04-20 A method for making a content sensitive video
US201414395591A 2014-12-03 2014-12-03
US16/533,384 US20200175762A1 (en) 2012-04-20 2019-08-06 Method for making a content sensitive video

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US14/395,591 Continuation US10373376B2 (en) 2012-04-20 2012-04-20 Method for making a content sensitive video
PCT/SG2012/000141 Continuation WO2013158034A1 (en) 2012-04-20 2012-04-20 A method for making a content sensitive video

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/109,649 Continuation US20210327141A1 (en) 2012-04-20 2020-12-02 Method for making a content sensitive video

Publications (1)

Publication Number Publication Date
US20200175762A1 true US20200175762A1 (en) 2020-06-04

Family

ID=49383823

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/395,591 Expired - Fee Related US10373376B2 (en) 2012-04-20 2012-04-20 Method for making a content sensitive video
US16/533,384 Abandoned US20200175762A1 (en) 2012-04-20 2019-08-06 Method for making a content sensitive video
US17/109,649 Abandoned US20210327141A1 (en) 2012-04-20 2020-12-02 Method for making a content sensitive video

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/395,591 Expired - Fee Related US10373376B2 (en) 2012-04-20 2012-04-20 Method for making a content sensitive video

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/109,649 Abandoned US20210327141A1 (en) 2012-04-20 2020-12-02 Method for making a content sensitive video

Country Status (4)

Country Link
US (3) US10373376B2 (en)
IL (1) IL235202A0 (en)
SG (1) SG11201406496TA (en)
WO (1) WO2013158034A1 (en)



