WO2012100202A1 - Dynamic 2d and 3d gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds - Google Patents


Info

Publication number
WO2012100202A1
Authority
WO
WIPO (PCT)
Prior art keywords
multimedia streaming
animation
Application number
PCT/US2012/022088
Other languages
French (fr)
Inventor
Filippo Costanzo
Antonio Rossi
Original Assignee
Filippo Costanzo
Antonio Rossi
Application filed by Filippo Costanzo and Antonio Rossi
Priority to US13/981,058 (published as US20130332829A1)
Publication of WO2012100202A1
Priority to US15/000,361 (published as US20160239095A1)

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
              • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
                • G06F 3/0481 Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
                • G06F 3/0484 Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                • G06F 3/0487 Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
                  • G06F 3/0488 Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
                    • G06F 3/04883 Interaction techniques for inputting data by handwriting, e.g. gesture or text
            • G06F 3/16 Sound input; Sound output
              • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 11/00 2D [Two Dimensional] image generation
            • G06T 11/20 Drawing from basic elements, e.g. lines or circles
              • G06T 11/206 Drawing of charts or graphs
          • G06T 15/00 3D [Three Dimensional] image rendering
            • G06T 15/10 Geometric effects
              • G06T 15/20 Perspective computation
          • G06T 19/00 Manipulating 3D models or images for computer graphics
            • G06T 19/006 Mixed reality



Abstract

A method of manipulating an audio-video visualization in a multi-dimensional virtual environment, implemented in a computer system having a display unit for displaying the virtual environment and a gesture-driven interface, said method manipulating the visualization in response to predetermined user gestures and movements identified by the gesture-driven interface.

Description

DYNAMIC 2D AND 3D GESTURAL INTERFACES FOR AUDIO VIDEO
PLAYERS CAPABLE OF UNINTERRUPTED CONTINUITY OF FRUITION
OF AUDIO VIDEO FEEDS
CROSS-REFERENCE TO RELATED APPLICATION
[1] This application claims the benefit of United States Provisional Patent Application
No. 61/435,277, entitled "Dynamic 2D And 3D Gestural Interfaces For Audio Video Players Capable Of Uninterrupted Continuity Of Fruition Of Audio Video Feeds", filed January 22, 2011, the contents of which are incorporated herein by reference in their entirety.
FIELD OF THE INVENTION
[2] The present invention relates to remote control devices, more specifically to a remote control for portable electronic devices that is simple to operate and operable with a single hand.
BACKGROUND
[3] Gestural interfaces have gained increased popularity in the last few years. Consumer electronics manufacturers such as Nintendo, Apple, Nokia, Sony Ericsson, LG, and Microsoft have all released products that are controlled using interactive gestures.
[4] It is foreseeable that hundreds of millions of devices will soon have such interfaces.
"Gesture" is considered any physical movement that an analog or digital system can sense and respond to without the aid of a interposed pointing device such as a mouse etc...
[5] Current videogame interfaces already use free-form gestures to allow players to make movements in space that are then reflected in on-screen actions, while Apple's iPhone and iPad employ touch screens that users control via a tap or a swipe of their fingertips.
[6] Other categories of hardware devices incorporating gesture-driven interfaces can be found in the videogame market; game consoles like the Microsoft Xbox 360 use dedicated hardware (Kinect) capable of reading the user's gestures through a sophisticated implementation of image recognition techniques and augmented (3D depth) camera acquisition.
[7] It is also foreseeable that these capabilities might expand in the future to other appliances beyond the realm of videogame consoles. Apple is currently selling a device named "Apple TV", which in the next version of the iOS operating system (version 4.3 at the moment of writing) will be capable of receiving, via wireless connection, audio-video content to be shown on TV screens, eventually using an iPhone/iPod/iPad hand-held device to serve as an enhanced remote control for Apple TV's user interface. It can easily be imagined that such a class of devices (Apple TV or Google TV, and so on) could also have, in the near future, the capability of receiving user input through a gestural interface driven by hardware comparable to the Xbox 360 Kinect mentioned above.
[8] It is interesting to notice that at the present time, gestural interfaces are mostly exploited in specific application domains such as web surfing, phone functions and games.
[9] These same interfaces are still grossly underutilized in the audio-video production and fruition domains, with the exception of a few very basic implementations. This is perhaps caused by the traditional assumption that audio-video content is a passive form of entertainment generally capable of only a very low level of interaction. On that note, possibly only the invention of the remote control could be considered one of the significant milestones of the past few decades. As an example, audio-video players are currently available on the Apple iPhone/iPad/iPod class of devices. Yet there has been no substantial enhancement to the user experience given by the available gestural interface capabilities, as most of the functionality seems limited to the classic "play", "pause", "stop", etc.
[10] In the preferred embodiment described in the present document we show an example of an application developed for the Apple iPad; said application takes full advantage of the gestural interface capabilities available in said device.
[11] The same concepts presented here are in any case easily transferred to other environments by a person ordinarily skilled in the art. Such environments may include the aforementioned Xbox 360 Kinect system, and possibly all other cases of gesture-enabled hardware.
Summary of the Invention
[12] This invention relates to a class of enhanced audio-video players capable of providing the experience of watching a nearly unlimited number of available audio-video feeds (pertaining to an event) from which the desired one can be interactively chosen, at any given moment, by the user, while the uninterrupted continuity of fruition of audio and video is maintained.
[13] Possible embodiments of such players include on-screen playback choice of audio-video feeds of an event; the feeds pertaining to a discrete number of audio and video sources available for said event.
[14] Other embodiments may include said discrete audio and video sources as well as a virtually unlimited number of vantage points of view obtained by: 1. the interpolation of said sources via real-time (or offline) 3D reconstruction and frame-rate-coherent rendering of the scene 3D geometry pertaining to the event being depicted; 2. augmented audio-visual capture systems capable of acquiring full tridimensional information of an event at the desired sample rates. Therefore such players may provide a virtually unlimited number of viewpoint choices beyond the discrete limitation of the original source audio and video material. Said class of players might be used on a variety of digital devices and operate with local and/or streamed audio and video feeds.
[15] The preferred embodiment of the present invention is related to said Apple devices, but the same concepts and methods could easily be applied to other environments, such as, for example, Android-based smartphones and/or tablets, or other products as well.
[16] The purpose of this invention is to create an interactive method of informing a gestural interface so as to provide the user with the experience of effectively transitioning inside the tridimensional space of the scene while choosing the desired vantage point of view in the audio-video player. The results might then be displayed on a screen or on a comparable 2D or 3D display device.
[17] Furthermore, the present invention aims to provide the user with the feeling of "being there", placing her/him inside a simulated environment (for example a Theatre) in which she/he can choose from virtually unlimited points of view and (if available) listening positions.
[18] The interaction between the user and the content (via the gestural interface) is extended to every element presented during the show; for example, in the preferred embodiment, the concurrent time-coding data processing also allows the user to exploit the gestural interface to "perform" as a "virtual director" - altering the timing of the audio-video feeds as in a slow motion effect, or as a "virtual conductor" - altering the tempo of an orchestra musical performance without modifying the audio pitch.
[19] Imagine watching a symphonic orchestral performance during which you might be able, via an advanced gestural interface, to transition among multiple available vantage points of view by indicating the direction and position to which the camera should move. You could experience the auditory environment as it would be perceived close to the violins or near the brass section. Furthermore, mimicking the gestures of an orchestra conductor, you could modify the execution by altering the tempo ("piano", "andante", "presto", etc.) of the musical performance and the loudness level.
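As a minimal illustration of such a tempo gesture (this is a hedged sketch, not the patent's code; the audioTimePitchAlgorithm property is an AVFoundation facility newer than the iOS SDK 3.2 used later in the preferred embodiment):

#import <AVFoundation/AVFoundation.h>
// Hypothetical sketch: scale playback time while preserving audio pitch.
static void applyTempoGesture(AVPlayer *player, float tempo) {
    // tempo < 1.0 slows the performance down, tempo > 1.0 speeds it up
    player.currentItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;
    player.rate = tempo; // time scaling without audio pitch alteration
}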
[20] For a definition of audio pitch please see: http://en.wikipedia.org/wiki/Pitch_(music).
[21] Another source of a very powerful audio time stretching algorithm is available here: http://mpex.prosoniq.com/
[22] An implementation of such a kind of application can be seen in the "Wii Music" video game by Nintendo, in which the player conducts an orchestra.
[23] It is crucial to point out that the content of that application is entirely computer generated (simulated by a computer hardware/software system and not relating to an actual real-life event being depicted), so it is completely different from the field of the present invention, which is instead related to uninterrupted switchable audio-video streaming content (locally stored or received via network/Internet).
[24] The desired level of interaction described in the present invention is obtained by means of an advanced gesture interface that calculates the relevant dimensional (space and time) data derived from the feeds (audio and visual positioning) and then interprets the user's input to determine the appropriate tridimensional path towards the desired direction (in 3D Space and/or Time), at which point an appropriate animation UI manages and produces the suitable screen transformations and effects in order to simulate the feeling of moving inside the space where the event being depicted occurs (or has occurred).
[25] The steps described here can be performed on the audio-video sources that can be obtained via the methods described above in the Summary of the Invention paragraph. Such sources might be available offline to be pre-processed, or could be streamed and interpreted in real time by the server and/or the client.
[26] The method is comprised of the following steps:
1. 3D Data Gathering:
Scene 3D Data - Analysis and/or Reconstruction
"Scene" is considered the tridimensional representation of the event and its locale as is possible to be determined via:
• Scene analysis from imaging data via, for example, structure-from-motion types of algorithms (S.I.F.T., S.L.A.M., http://photosynth.net/, etc.) or another comparable approach.
• 3D sensors and 3D sensors augmented cameras (TOF [Time Of Flight] - http://www.illuminx.com - Microsoft Kinect, etc.).
• knowledge of the cameras' (and/or sensors') relevant parameters (which may include interior and exterior camera parameters).
• virtual camera positions derived from otherwise obtained information (as described in Summary of the Invention).
• Scene analysis via audio positional information (if available for example when multiple audio feeds are captured).
Scene 3D Calibration
The purpose of this process is to infer a dynamic sample (per frame or any desired interval) of:
• camera position 3D coordinates for each of the available video feeds.
• camera lens information for each of the available video feeds.
• view direction vector for each of the available video feeds.
• positional audio data for each of the available audio feeds.
• determination of the Virtual Acoustic Environment of scene locale.
• global world scale coordinates of the Scene (generally not dynamic); this is realized by introducing (human or other) scale reference assumptions based on:
knowledge of geometrically invariant parts of the scene.
user determination (measurement).
human body tracking and recognition.
An alternative embodiment (described above) might add:
• full scene 3D reconstruction via augmented capture devices.
2. Data Representation:
Dynamic Representation of Scene 3D Data
In one possible embodiment this is an XML file that can be dynamically updated at the required intervals (frame-rate or otherwise), carrying for example the following (an illustrative sketch of such a file follows this list):
• x, y, z coordinates of camera positions.
• direction vector.
• lens information.
• audio positioning.
• Virtual Acoustic Environment of scene locale.
• time coding information relative to audio and video.
• various formats of full scene 3D data representation.
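Purely as an illustration, one per-interval record of such an XML file might look like the sketch below; the element names are hypothetical and not prescribed by the invention:

<scene timecode="00:01:23:12">
  <camera id="sx">
    <position x="-4.2" y="1.6" z="7.9"/>
    <direction x="0.45" y="-0.05" z="-0.89"/>
    <lens focalLength="35.0"/>
    <audioPosition x="-4.2" y="1.6" z="7.9"/>
  </camera>
  <acousticEnvironment preset="theatre"/>
</scene>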
3. Processing:
The data described above is processed via:
Scene Descriptor
This is the class that describes (in terms of 2D/3D spatial layout and relations) the connection graph of the available vantage points of view. It also reads the Dynamic 4D Data (3D positioning plus Time Coding) information after it has been elaborated. The time-coded information (expressed in the appropriate format and intervals, e.g. frames or timecode or subsamples thereof) can be used to drive time-altering actions by the users (or system, e.g. editing list) on the audio-video feeds.
Scene Mapper
Determines the topology configurations of the vantage points of view and of their respective Virtual Acoustic Environment configurations, with all their relational connections. This determines the geometric configuration of the simulated 3D space (plane, sphere, complex surface, volume, etc.) and of the possible transitional paths among points of view and their relative listening positions.
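As a hedged sketch of such a topology, assuming the five feed names used later in the preferred embodiment ("sx", "hi", "my", "ph", "dx") and modern Objective-C literal syntax, the relational connections could be held in a dictionary mapping each vantage point to its neighbours by direction:

#import <Foundation/Foundation.h>
// Hypothetical sketch of a connection graph: for each feed, the neighbouring
// feed reached by moving in a given direction ("ph" omitted for brevity).
static NSDictionary *vantagePointGraph(void) {
    return @{ @"my" : @{ @"left" : @"sx", @"right" : @"dx", @"up" : @"hi" },
              @"sx" : @{ @"right" : @"my" },
              @"dx" : @{ @"left" : @"my" },
              @"hi" : @{ @"down" : @"my" } };
}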
4. User Input:
Gesture based choice of vantage point of view playback of audio-video content.
Gesture based choice of time altered playback of audio-video content.
Selectable by the user among a programmable set of gestural interface actions, such as swipes, touches or others.
5. Gesture Mapper:
User input is processed and the Gesture Mapper assigns a path (among those available, as instructed by the Scene Mapper) for performing the necessary tridimensional transformations to be applied to the current point of view (and listening position) in order to transition it into the one chosen by the user (or system, e.g. editing list). User input can also be mapped to the allowed time-coded information actions, for example time scaling (slow motion or fast forward, with or without audio pitch alteration), etc.
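A minimal sketch of that mapping, assuming a connection graph like the one sketched for the Scene Mapper above (the function name is illustrative):

// Hypothetical sketch: resolve a swipe into the destination vantage point.
static NSString *targetFeedForSwipe(NSDictionary *graph,
                                    NSString *currentFeed,
                                    NSString *swipeDirection) {
    NSString *target = [[graph objectForKey:currentFeed] objectForKey:swipeDirection];
    return (target != nil) ? target : currentFeed; // no path available: stay put
}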
6. Animation Interface and Output:
Animation transitional elements (audio and video) are assigned, triggered and rendered, along with the appropriate audio-video feeds for the user's (or system's, e.g. editing list) desired points of view (and listening position), to the device-appropriate output, e.g. viewport (screen or 3D display), speakers, etc.
[27] The objective of having virtually unlimited feeds without compromising the continuity of fruition of audio and video is challenging. It becomes even more challenging if we attempt to realize it using devices that have limited hardware resources, such as the aforementioned one in the preferred embodiment of the present invention.
[28] Nonetheless, for the purpose of creating the desired perceptual effect, it is sufficient to provide the user with the feeling of having a nearly unlimited number of vantage points of view constantly available. This is in fact obtained via the dynamic management of only a few of them (points of view) at any given time through an efficient code base.
[29] So in the preferred embodiment of the present invention we are actually only managing two main views for the video feeds at any time (the minimum number necessary for animating transitions), and only a single audio track, which also serves as the basis for time synchronization between all the available sources.
[30] The actual sources, though, are in fact available in a number greater than two, and they are switched in the player, at any given moment, via the extensive utilization of uninterrupted switchable streaming techniques (encapsulating sources inside an array, switching to the destination feed exclusively when a key-frame is available so as not to generate artifacts, using a common shared timeline, etc.).
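A minimal sketch of such a switch, assuming AVFoundation players and the common shared timeline described above (the function name is illustrative, not the patent's actual code):

#import <AVFoundation/AVFoundation.h>
// Hypothetical sketch: align the incoming (hidden) player to the shared
// timeline before the visible/hidden roles are swapped, so that fruition
// continues uninterrupted during the switch.
static void switchToFeed(AVPlayer *incomingPlayer, AVPlayer *outgoingPlayer, CMTime sharedTime) {
    [incomingPlayer seekToTime:sharedTime]; // join the common shared timeline
    [incomingPlayer play];                  // starts behind the visible player
    [outgoingPlayer pause];                 // the animation then swaps the views
}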
[31] In the described environment a user can, for example, interact with the player through a swipe or touch gesture. This allows her/him to freely switch among a great number of available video feeds, where the transitions between subsequent choices are animated in the viewport in a planar fashion relative to the device screen space (for example, from a centered position a swipe-right gesture will produce a slide transition to a camera to the right), all of this happening while the show (audio-video) continues uninterrupted.
DETAILED DESCRIPTION
[32] The present invention overcomes the limitations of the prior art. Methods and systems that implement the embodiments of the various features of the invention will now be described.
The descriptions are provided to illustrate embodiments of the invention and not to limit the scope of the invention. Reference in the specification to "one embodiment" or "an embodiment" is intended to indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" or "an embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
[33] As used in this disclosure, except where the context requires otherwise, the term
"comprise" and variations of the term, such as "comprising", "comprises" and "comprised" are not intended to exclude other additives, components, integers or steps.
[34] In the following description, specific details are given to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. Well-known circuits, structures and techniques may not be shown in detail in order not to obscure the embodiments. For example, circuits may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail.
[35] Also, it is noted that the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
[36] Moreover, a storage may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine readable mediums for storing information. The term "machine readable medium" includes, but is not limited to portable or fixed storage devices, optical storage devices, wireless channels and various other mediums capable of storing, containing or carrying instruction(s) and/or data.
[37] Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s). One or more than one processor may perform the necessary tasks in series, concurrently or in parallel. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or a combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted through a suitable means including memory sharing, message passing, token passing, network transmission, etc.
[38] The system and method will now be disclosed in detail. Preferred embodiments will now be described more fully. Embodiments, however, may be embodied in many different forms and should not be construed as being limited to the embodiment set forth herein. Rather, this preferred embodiment is provided so that this disclosure will be thorough and complete, and will fully convey the scope to those skilled in the art.
[39] As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
[40] It will be understood that although the terms first, second, third, etc., may be used herein to describe various elements, components, Classes or methods, these elements, components, Classes or methods should not be limited by these terms. These terms are only used to distinguish one element, component, Class or method from another element, component, Class or method.
[41] For example, a first element, component, Class and/or method could be termed a second element, component, Class and/or method without departing from the teachings of example embodiments.
[42] Spatially relative methods, such as "-(void)animateFromRight", "-(void)animateFromLeft", "-(void)animateFromTop", "-(void)animateFromBottom", and the like may be used herein for ease of description to describe the relationship of one method/Class and/or feature to another method/Class and/or feature, or other method(s)/Class(es) and/or feature(s).
[43] It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation.
[44] The terminology used herein is for the purpose of describing this preferred embodiment only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or components.
[45] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which preferred embodiment belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[46] In this description of the preferred embodiment we are using the iOS SDK 3.2 for an Apple iPad application, which is available to registered developers at the website of said company.
[47] For the single audio track we are using a singleton that is obtained using a macro header file ("SynthesizeSingleton.h") written by Matt Gallagher, which is available at the following website link: http://cocoawithlove.com/2008/11/singletons-appdelegates-and-top-level.html.
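For context, a singleton accessor of this kind conventionally reduces to something like the following sketch (this is not Gallagher's actual macro body):

// Hypothetical sketch of what the generated accessor amounts to: a single
// lazily created, thread-safe shared instance of the class.
+ (SingleAudio *)sharedSingleAudio {
    static SingleAudio *sharedInstance = nil;
    @synchronized(self) {
        if (sharedInstance == nil) {
            sharedInstance = [[self alloc] init];
        }
    }
    return sharedInstance;
}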
[48] The "SingleAudio" class is driving the timeline to which all video feeds refer for synchronization, in the preferred embodiment the audio file is loaded into the "Main Bundle" of the App also in the case that the video files are streamed over a network (i.e. Internet) to assure that user listening experience is unaffected by communication failures, but if it is not a strict requirement the audio track could be equally loaded over the net.
[49] "SingleAudio" class header file is defined as follows: "SingleAudio.h" Code starts below:
//
// SingleAudio.h
// iPov3
//
// Created by Antonio Rossi on 02/01/11.
// Copyright 2011 Yoctle Limited. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import "SynthesizeSingleton.h"
@interface SingleAudio : NSObject {
    AVPlayerItem *audioItem;
    AVPlayer *audioPlayer;
    CMTime *audioTime;
}
@property (nonatomic, retain) AVPlayerItem *audioItem;
@property (nonatomic, retain) AVPlayer *audioPlayer;
// Class method to return an instance of SingleAudio. This is needed as this
// class is a singleton class.
+ (SingleAudio *)sharedSingleAudio;
-(void)play;
-(void)syncUI;
-(CMTime)currentTime;
@end
"SingleAudio.h" code has finished here.
[50] The implementation of the "SingleAudio" class is described below.
[51] "SingleAudio" class implementation code starts here: //
// SingleAudio.m
// iPov3
//
// Created by Antonio Rossi on 02/01/11.
// Copyright 2011 Yoctle Limited. All rights reserved.
//
#import "SingleAudio.h"
#import "CocosDenshion.h"
#import "SimpleAudioEngine.h"
@implementation SingleAudio
@synthesize audioItem;
@synthesize audioPlayer;
static const NSString *ItemStatusContext;
// Make this class a singleton class
SYNTHESIZE_SINGLETON_FOR_CLASS(SingleAudio);
-(id)init {
if ((self = [super init])) {
NSURL *audioFileURL = [[NSBundle mainBundle]
URLForResource: @ "audio" withExtension: @ "m4v"] ;
NSLog(@ "loaded audio.m4v");
AVURLAsset *audioAsset = [AVURLAsset URLAssetWithURL:audioFileURL options:nil];
self. audioltem = [AVPlayerltem playerItemWithAsset:audioAsset];
self. audioPlayer = [AVPlayer playerWithPlayer Item: audioltem];
[self .audioPlayer pause] ;
[[NSNotificationCenter defaultCenter] addObservenself selector: @ selector(playerItemDidReachEnd:)
name : A VPlayerltemDidPlayToEndTimeNotific ation obj ec t : audioltem] ; [self.audioltem addObservenself forKeyPath: @ " status" options:0 context:&ItemStatusContext]; }
return self; }
-(void)play {
[audioPlayer play];
NSLog(@"audio playing"); }
-(CMTime)currentTime {
return self.audioItem.currentTime; }
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
if (context == &ItemStatusContext) {
[self syncUI];
return; }
[super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
return; }
- (void)syncUI {
if ((audioPlayer.currentItem != nil) &&
([audioPlayer.currentItem status] == AVPlayerItemStatusReadyToPlay)) { } else { } }
- (void)playerItemDidReachEnd:(NSNotification *)notification {
// bring again the show at the beginning and notify povs
[audioPlayer seekToTime:kCMTimeZero];
[[NSNotificationCenter defaultCenter]
postNotificationName:@"showDidReachEnd" object:self];
NSLog(@"show did reach end, sent notification"); }
@end
"SingleAudio" class implementation code finished.
[52] The format and requirements of the content are managed by a class
"SceneDescriptor"; in this implementation it has hard-coded source files for loading the required video sources and all the basic graphic elements, such as the thumbnails.
[53] "SceneDescriptor" header file is as follows:
[54] "SceneDescriptor" header code starts below:
//
// SceneDescriptor.h
// iPov3
//
// Created by Antonio Rossi on 01/01/11.
// Copyright 2011 Yoctle Limited. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
@interface SceneDescriptor : NSObject {
    NSArray *sourcesFileNames;
    NSString *sourcesFileType;
    NSArray *thumbnailsFileNames;
    NSString *thumbnailsFileType;
    NSMutableArray *stageFeedDistributor;
    NSMutableArray *stageThumbnailsStreamsReaders;
    AVPlayerItem *sourcePlayerItem;
    int numberOfVideoFeeds;
    int initialVideoFeed;
}
@property (nonatomic, retain) NSMutableArray *stageFeedDistributor;
@property (nonatomic, retain) NSMutableArray *stageThumbnailsStreamsReaders;
@property (nonatomic, retain) AVPlayerItem *sourcePlayerItem;
@property int numberOfVideoFeeds;
@property int initialVideoFeed;
-(id)initWithStageFiles;
@end
"SceneDescriptor" header code finished.
[55] "SceneDescriptor" implementation code is as follows:
[56] "SceneDescriptor" implementation code starts below:
#import "SceneDescriptor.h"
#import "Global.h"
@implementation SceneDescriptor
@synthesize stageFeedDistributor, stageThumbnailsStreamsReaders;
@synthesize sourcePlayerItem;
@synthesize numberOfVideoFeeds;
@synthesize initialVideoFeed;
-(id)init {
return [self initWithStageFiles];}
-(id)initWithStageFiles {
if (self = [super init]) {
// create the array with stage filenames
sourcesFileNames = [NSArray arrayWithObjects:@"sx", @"hi", @"my", @"ph", @"dx", nil];
sourcesFileType = @"m4v";
thumbnailsFileNames = [NSArray arrayWithObjects:@"sx-thumb", @"hi-thumb", @"my-thumb", @"ph-thumb", @"dx-thumb", nil];
thumbnailsFileType = @"mov";
NSLog(@"loading stage...");
// init the array of sources
stageFeedDistributor = [[NSMutableArray alloc]
initWithCapacity:[sourcesFileNames count]];
stageThumbnailsStreamsReaders = [[NSMutableArray alloc]
initWithCapacity:[thumbnailsFileNames count]];
// init properties
numberOfVideoFeeds = [sourcesFileNames count];
NSLog(@"here we have %d sources", numberOfVideoFeeds);
initialVideoFeed = INITIAL_VIDEO_FEED;
NSLog(@"initial point of view of this stage: %d",
initialVideoFeed); }
return self;}
-(NSMutableArray *)stageThumbnailsStreamsReaders {
for (int i = 0; i < numberOfVideoFeeds; i++) {
// set the thumbnails array of assets
NSURL *thumbnailFileURL = [[NSBundle mainBundle]
URLForResource:[thumbnailsFileNames objectAtIndex:i]
withExtension:thumbnailsFileType];
AVURLAsset *thumbnailAsset = [[AVURLAsset alloc]
initWithURL:thumbnailFileURL
options:[NSDictionary dictionaryWithObject:[NSNumber
numberWithBool:NO] forKey:AVURLAssetPreferPreciseDurationAndTimingKey]];
[stageThumbnailsStreamsReaders addObject:thumbnailAsset];
// [thumbnailFileURL release];
}
return stageThumbnailsStreamsReaders; }
-(NSMutableArray *)stageFeedDistributor {
for (int i = 0; i < numberOfVideoFeeds; i++) {
// set the sources array of playerltems
NSLog(@ "loading %i files of type @ ", [sourcesFileNames count], sourcesFileType);
NSURL * sourceFileURL = [[NSBundle mainBundle]
URLForResource: [sourcesFileNames objectAtlndexd]
withExtension: sourcesFileType] ;
AVURLAsset *sourceAsset = [[AVURLAsset alloc]
initWithURL: sourceFileURL
options: [NSDictionary dictionaryWithObject: [NSNumber numberWithBoohYES]
forKey:AVURLAssetPreferPreciseDurationAndTimingKey]];
self.sourcePlayerltem = [AVPlayerltem
playerltemWithAs set: sourceAs set] ;
[stageFeedDistributor addObject:sourcePlayerItem] ;
[sourceAsset release] ;
//[sourceFileURL release]; }
return stageFeedDistributor; }
-(void) dealloc {
NSLog(@ "deallocating stage initializer");
[stageFeedDistributor dealloc];
[stageThumbnailsStreamsReaders dealloc] ;
NSLog(@ "deallocating playerltems initializer");
[super dealloc];}
@end
"SceneDescriptor" implementation code finished. [57] A "UserSessionManager" class coordinates relationships between user gestures and devices status and orientation, it is receiving inputs from the GUI (graphical user interface), it alternates two objects of a "StreamProducer" class in order to manage the presentation on the screen device of two alternating "StreamConsumer" objects, one user visible video at a given time and another one operating not visible in background that is provided for animating transitions, this last one swapping its role with the first at the end of a transition.
[58] The alternation between the two "StreamProducer" objects realizes the effect of unlimited switchable video feeds presented on the screen device, selectable via gestures given by the user, such as swipes or touches on the screen device.
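A minimal sketch of the role swap at the end of a transition, assuming the instance variables declared in the header below (the method name is hypothetical):

// Hypothetical sketch: the hidden producer becomes the user-visible one and
// the former visible producer is recycled for the next transition.
- (void)finishTransition {
    StreamProducer *previouslyVisible = firstStreamProducer;
    firstStreamProducer = secondStreamProducer; // now presented to the user
    secondStreamProducer = previouslyVisible;   // now hidden and reusable
    isOtherPlayer = !isOtherPlayer;
}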
[59] The header file of "UserSessionManager" class is shown below.
[60] "UserSessionManager" header file code starts here:
//
// UserSessionManager.h
// iPov3
//
// Created by Antonio Rossi on 07/01/11.
// Copyright 2011 Yoctle Limited. All rights reserved.
//
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import <iAd/iAd.h>
@class StreamProducer;
@class SingleAudio;
@class PlayerControlsViewController;
@class SplashViewController;
@class BannerViewController;
@class InfoViewController;
@interface UserSessionManager : UIViewController <ADBannerViewDelegate, UIWebViewDelegate> {
StreamProducer *firstStreamProducer;
StreamProducer *secondStreamProducer;
SplashViewController *splashViewController;
BannerViewController *bannerViewController;
InfoViewController *infoViewController;
SingleAudio *sharedSingleAudio;
BOOL isOtherPlayer;
BOOL okToSwitch;
BOOL playerControlsShown;
BOOL hidePlayerControlsWithAnimation;
BOOL switchHasBeenReset;
int newPointOfView;
PlayerControlsViewController *playerControls;
UIDeviceOrientation lastOrientation; }
@property (nonatomic, retain) IBOutlet StreamProducer *firstStreamProducer;
@property (nonatomic, retain) IBOutlet StreamProducer *secondStreamProducer;
@property (nonatomic, retain) IBOutlet PlayerControlsViewController
*playerControls;
-(void)switchFeed;
-(void)loadStage;
-(void)assignFeed;
-(void)swipeCanBeCanceled;
-(void)pauseShow;
-(void)resumeShow;
-(void)returnToShow;
@end
"UserSessionManager" header file code finished. [61] "UserSessionManager" class has a method defined as "-(void)assignFeed"; this is an algorithm that maps navigation rules between available feeds and user interaction via gestures.
[62] Suppose we have a certain number of feeds referring to multiple views of a given event; we can establish that a user viewing a central camera and swiping to the right will go to a right camera, and back if swiping in the opposite direction. Method "-(void)switchFeed" implements the logic for the two alternating "StreamProducer" class objects, of which only one is presented to the user at a given time, the other being always available for the animating transition, as said before.
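A minimal sketch of such a navigation rule, assuming feeds indexed left to right and clamped at the extremes, with a feed count like the one kept by "SceneDescriptor" (the method signature is illustrative, not the actual "-(void)assignFeed" body):

// Hypothetical sketch: map a left/right swipe from the current feed index
// to the destination feed index, staying put at the extremes.
- (int)feedForSwipeRight:(BOOL)swipeRight fromFeed:(int)currentFeed {
    int destination = swipeRight ? currentFeed + 1 : currentFeed - 1;
    if (destination < 0) destination = 0;
    if (destination > numberOfVideoFeeds - 1) destination = numberOfVideoFeeds - 1;
    return destination;
}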
[63] "UserSessionManager" class has two methods named "-(void)showPlayerControls" and "-(void)hidePlayerControls:(NSTimer *)theTimer", which are managing the presentation on screen device of a GUI in which thumbnails of the available feeds can be shown, providing the user other interface elements with which he can interacts.
[64] As said before, in "UserSessionManager" the video feeds are constantly synchronized to the timeline of the singleton "SingleAudio" class, for example in the switching phase or in the show management methods.
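As a hedged sketch of that synchronization step (the method name is illustrative), a feed being switched in is sought to the singleton's timeline, mirroring the seekToTime pattern of the listings in this document:

// Hypothetical sketch: before a feed is presented, seek it to the timeline
// of the "SingleAudio" singleton so that video never drifts from the audio.
- (void)synchronizePlayer:(AVPlayer *)player {
    [player seekToTime:[[SingleAudio sharedSingleAudio] currentTime]];
}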
[65] "UserSessionManager" class further has other methods related to "show
management", the timing of the presentation of elements and the acquisition of user gestures, the managing of ads presented on screen, the managing of various phases of animating transition.
[66] "UserSessionManager" class implementation code is as follows.
[67] "UserSessionManager" implementation code start below.
//
// UserSessionManager.m
// iPov3
//
// Created by Antonio Rossi on 07/01/11.
// Copyright 2011 Yoctle Limited. All rights reserved.
//
#import "UserSessionManager.h"
#import "Global.h" #import "StreamProducer.h"
#import "SingleAudio.h"
#import "PlayerControlsViewController.h"
#import "Splash ViewController.h"
#import "Banner ViewController.h"
#import "InfoViewController.h"
@interface UserSessionManager (PrivateMethods)
-(void)startShow;
-(void)rewindShow;
-(void)forwardShow;
-(void)exitShow;
-(void)syncSources;
-(void)hidePlayerControls:(NSTimer*)theTimer;
-(void)invalidatePlayer:(StreamProducer *)aPlayerViewController;
-(void)animateToFirstPlayer:(id)nilObject;
-(void)animateToSecondPlayer:(id)nilObject;
-(void)thumbZero;
-(void)thumbOne;
-(void)thumbTwo;
-(void)thumbThree;
-(void)thumbFour;
-(void)switchFeedFromThumbnail;
-(void)showPlayerControls;
-(void)splash;
@end
@implementation UserSessionManager
@synthesize firstStreamProducer;
@synthesize secondStreamProducer;
@synthesize playerControls;
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad {
[super viewDidLoad];
isOtherPlayer = YES;
playerControlsShown = NO;
sharedSingleAudio = [SingleAudio sharedSingleAudio];
// Register to receive notifications
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(switchFeed) name:@"switchCamera" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(showDidReachEnd) name:@"showDidReachEnd" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(showPlayerControls) name:@"userWantsPlayerControls" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(pauseShow) name:@"pauseButton" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(resumeShow) name:@"playButton" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(rewindShow) name:@"rewindButton" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(showContentInfo) name:@"contentInfoButton" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(showDemoInfo) name:@"demoInfoButton" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(thumbZero) name:@"thumbZero" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(thumbOne) name:@"thumbOne" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(thumbTwo) name:@"thumbTwo" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(thumbThree) name:@"thumbThree" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(thumbFour) name:@"thumbFour" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(returnToShow) name:@"returnToShow" object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(orientationChanged:)
name:@"UIDeviceOrientationDidChangeNotification" object:nil];
[self loadStage]; }
- (void)loadStage {
    // create the two alternating views with player items
    NSLog(@"allocating a firstPlayerViewController");
    firstStreamProducer = [[StreamProducer alloc] initWithNibName:nil bundle:nil];
    [self.view addSubview:firstStreamProducer.view];
    [firstStreamProducer.streamProducer pause];
    NSLog(@"first player allocated and having point of view: %d", firstStreamProducer.currentVideoFeed);
    NSLog(@"allocating a secondPlayerViewController");
    secondStreamProducer = [[StreamProducer alloc] initWithNibName:nil bundle:nil];
    [self.view addSubview:secondStreamProducer.view];
    [secondStreamProducer.streamProducer pause];
    NSLog(@"second player allocated and having point of view: %d", secondStreamProducer.currentVideoFeed);
    // create the view for player controls
    NSLog(@"allocate and init a playerControls view controller");
    playerControls = [[PlayerControlsViewController alloc] initWithNibName:@"PlayerControls" bundle:nil];
    [self.view addSubview:playerControls.view];
    if (self.interfaceOrientation == UIInterfaceOrientationPortrait || self.interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown) {
        [playerControls setControlsOnScreenPortrait];
    }
    [[playerControls view] setHidden:YES];
    [playerControls loadStage];
    [playerControls showChoosenPointOfView:INITIAL_VIDEO_FEED];
    // create the view for the info
    infoViewController = [[InfoViewController alloc] initWithNibName:nil bundle:nil];
    infoViewController.wantsFullScreenLayout = YES;
    [self.view addSubview:infoViewController.view];
    [[infoViewController view] setHidden:YES];
    // Load the banner
    NSLog(@"creating the banner of iAD");
    bannerViewController = [[BannerViewController alloc] initWithNibName:nil bundle:nil mainViewController:self];
    [self.view addSubview:bannerViewController.view];
    [self splash];
}
#pragma mark -
#pragma mark show management
-(void)splash {
    // Place splash on screen
    splashViewController = [[SplashViewController alloc] initWithNibName:nil bundle:nil];
    splashViewController.wantsFullScreenLayout = YES;
    [self.view addSubview:splashViewController.view];
    [self startShow];
    [self performSelector:@selector(splashRemove) withObject:self afterDelay:4];
}
-(void)splashRemove {
    CATransition *animation;
    animation = [CATransition animation];
    [animation setDuration:4];
    [animation setType:kCATransitionFade];
    [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
    [animation setDelegate:self];
    [[self.view layer] addAnimation:animation forKey:@"splashHiding"];
    NSLog(@"splash screen has been hidden");
    [[splashViewController view] removeFromSuperview];
    [splashViewController autorelease];
}
-(void)startShow {
    [sharedSingleAudio.audioPlayer seekToTime:kCMTimeZero];
    [firstStreamProducer.streamProducer seekToTime:kCMTimeZero];
    [secondStreamProducer.streamProducer seekToTime:kCMTimeZero];
    [sharedSingleAudio play];
    [firstStreamProducer.streamProducer play];
    [secondStreamProducer.streamProducer play];
    NSLog(@"show starting from the beginning");
}
-(void)pauseShow {
    [sharedSingleAudio.audioPlayer pause];
    [firstStreamProducer.streamProducer pause];
    [secondStreamProducer.streamProducer pause];
}
-(void)resumeShow {
    [self syncSources];
    [sharedSingleAudio.audioPlayer play];
    [firstStreamProducer.streamProducer play];
    [secondStreamProducer.streamProducer play];
}
-(void)rewindShow {
    [[[SingleAudio sharedSingleAudio] audioPlayer] seekToTime:kCMTimeZero];
    [firstStreamProducer.streamProducer seekToTime:kCMTimeZero];
    [secondStreamProducer.streamProducer seekToTime:kCMTimeZero];
}
-(void)showContentInfo {
    [infoViewController showPageWithURL:@"http://www.swipeplayer.com/Puccini-Le_Villi-promo/credits.htm" requestedBy:self];
}
-(void)showDemoInfo {
    [infoViewController showPageWithURL:@"http://www.swipeplayer.com/SwipePlayer-demo_info/iPOV_technology.htm" requestedBy:self];
}
-(void)returnToShow {
    CATransition *animation;
    animation = [CATransition animation];
    [animation setDuration:1];
    [animation setType:kCATransitionFade];
    [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
    [animation setDelegate:self];
    [[bannerViewController banner] setHidden:NO];
    [[infoViewController view] setHidden:YES];
    [playerControls infoDismissed];
    [[self.view layer] addAnimation:animation forKey:@"userDismissedDemoInfo"];
    [self resumeShow];
}
-(void)showDidReachEnd {
    [self pauseShow];
    [infoViewController showPageWithURL:@"http://www.swipeplayer.com/SwipePlayer-demo_info/iPOV_technology.htm" requestedBy:self];
    [self rewindShow];
}
-(void)syncSources {
    CMTime currentTime = [sharedSingleAudio currentTime];
    [firstStreamProducer.streamProducer seekToTime:currentTime];
    [secondStreamProducer.streamProducer seekToTime:currentTime];
}
-(void)showPlayerControls {
    if (!playerControlsShown && firstStreamProducer.gotSwipe == NO && secondStreamProducer.gotSwipe == NO) {
        [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(hidePlayerControls:) object:nil];
        [playerControls setShownOnScreen:YES];
        CATransition *animation;
        animation = [CATransition animation];
        [animation setDuration:0.5];
        [animation setType:kCATransitionFade];
        [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
        [animation setDelegate:self];
        [[self.view layer] addAnimation:animation forKey:@"playerControlsShowing"];
        [playerControls.view setHidden:NO];
        playerControlsShown = YES;
        [playerControls setShownOnScreen:YES];
        NSLog(@"playerControls positioning has been defined and we returned to the main view controller");
        hidePlayerControlsWithAnimation = YES;
        [self performSelector:@selector(hidePlayerControls:) withObject:nil afterDelay:10];
        NSLog(@"controls will hide in ten seconds");
    }
    else {
        [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(hidePlayerControls:) object:nil];
        NSLog(@"got a touch with playerControls shown, this means we have to hide them");
        hidePlayerControlsWithAnimation = NO;
        [self hidePlayerControls:nil];
    }
}
-(void)hidePlayerControls:(NSTimer *)theTimer {
    if (lastOrientation == UIDeviceOrientationLandscapeLeft || lastOrientation == UIDeviceOrientationLandscapeRight) {
        if (hidePlayerControlsWithAnimation) {
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(hidePlayerControls:) object:nil];
            NSLog(@"checking if controls are onscreen");
            [playerControls.view setHidden:YES];
            playerControlsShown = NO;
            [playerControls setShownOnScreen:NO];
            CATransition *animation;
            animation = [CATransition animation];
            [animation setDuration:0.5];
            [animation setType:kCATransitionFade];
            [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
            [animation setDelegate:self];
            [[self.view layer] addAnimation:animation forKey:@"playerControlsHiding"];
            NSLog(@"playerControls have been hidden");
        }
        else if (playerControlsShown) {
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(hidePlayerControls:) object:nil];
            NSLog(@"hiding playerControls for a swipe request");
            [playerControls.view setHidden:YES];
            [playerControls setShownOnScreen:NO];
            playerControlsShown = NO;
        }
    }
}
#pragma mark -
#pragma mark switching
-(void)switchFeed {
    firstStreamProducer.gotSwipe = YES;
    secondStreamProducer.gotSwipe = YES;
    NSLog(@"isOtherPlayer ??: %hu", isOtherPlayer);
    if (isOtherPlayer == NO) {
        NSLog(@"firstPlayer currentPointOfView: %d - nextPointOfView: %d", firstStreamProducer.currentVideoFeed, firstStreamProducer.newVideoFeed);
        NSLog(@"secondPlayer currentPointOfView: %d - nextPointOfView: %d", secondStreamProducer.currentVideoFeed, secondStreamProducer.newVideoFeed);
        [self assignFeed];
        if (okToSwitch) {
            NSLog(@"removing firstPlayerView having point of view: %d", firstStreamProducer.currentVideoFeed);
            NSLog(@"check if we got a double swipe");
            if (secondStreamProducer.currentVideoFeed != newPointOfView) {
                secondStreamProducer.currentVideoFeed = newPointOfView;
                [[secondStreamProducer streamProducer] replaceCurrentItemWithPlayerItem:[secondStreamProducer getFeed:secondStreamProducer.currentVideoFeed]];
                [[secondStreamProducer streamProducer] seekToTime:[sharedSingleAudio currentTime]];
            }
            [self.view exchangeSubviewAtIndex:1 withSubviewAtIndex:0];
            [[firstStreamProducer animation] setDelegate:self];
            [[self.view layer] addAnimation:firstStreamProducer.animation forKey:@"userSwitchedCamera"];
            NSLog(@"secondPlayer on top, having point of view: %d", secondStreamProducer.currentVideoFeed);
            NSLog(@"firstPlayer having point of view %d will be set to the same above", firstStreamProducer.currentVideoFeed);
        }
    } else {
        NSLog(@"entering in the secondPlayerViewController switch loop");
        NSLog(@"firstPlayer currentPointOfView: %d - nextPointOfView: %d", firstStreamProducer.currentVideoFeed, firstStreamProducer.newVideoFeed);
        NSLog(@"secondPlayer currentPointOfView: %d - nextPointOfView: %d", secondStreamProducer.currentVideoFeed, secondStreamProducer.newVideoFeed);
        [self assignFeed];
        if (okToSwitch) {
            NSLog(@"removing secondPlayerView having point of view: %d", secondStreamProducer.currentVideoFeed);
            NSLog(@"got the needed animation, exchanging subviews");
            NSLog(@"check if we got a double swipe");
            if (firstStreamProducer.currentVideoFeed != newPointOfView) {
                firstStreamProducer.currentVideoFeed = newPointOfView;
                [[firstStreamProducer streamProducer] replaceCurrentItemWithPlayerItem:[firstStreamProducer getFeed:firstStreamProducer.currentVideoFeed]];
                [[firstStreamProducer streamProducer] seekToTime:[sharedSingleAudio currentTime]];
            }
            [self.view exchangeSubviewAtIndex:1 withSubviewAtIndex:0];
            [[secondStreamProducer animation] setDelegate:self];
            [[self.view layer] addAnimation:secondStreamProducer.animation forKey:@"userSwitchedCamera"];
            NSLog(@"subview 0 has replaced subview 1");
            NSLog(@"firstPlayer on top, having point of view: %d", firstStreamProducer.currentVideoFeed);
            NSLog(@"secondPlayer having point of view %d will be set to the same above", secondStreamProducer.currentVideoFeed);
        }
    }
    // if we did the switch we need to change player
    if (okToSwitch == YES) {
        NSLog(@"we did a switch, swapping between players");
        isOtherPlayer = !isOtherPlayer;
        // and update the thumbnails logo
        [playerControls showChoosenPointOfView:newPointOfView];
    }
}
-(void)assignFeed {
    if (isOtherPlayer == NO) {
        newPointOfView = firstStreamProducer.currentVideoFeed;
        NSLog(@"evaluating firstPlayerViewController camera to assign a new camera to secondPlayerViewController");
        NSLog(@"user wants camera switch: %d", firstStreamProducer.userWantsCameraSwitch);
        switch (firstStreamProducer.currentVideoFeed) {
            case 0: //dx
                NSLog(@"we are at dx point of view in the firstPlayerViewController");
                switch (firstStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        okToSwitch = YES; newPointOfView = 3; break;
                    case UserWantsCameraSwitchRight:
                        okToSwitch = YES; newPointOfView = 4; break;
                    case UserWantsCameraSwitchLeft:
                        okToSwitch = YES; newPointOfView = 2; break;
                    case UserWantsCameraSwitchDown:
                        NSLog(@"user wants switch down");
                        okToSwitch = YES; newPointOfView = 1; break;
                    default:
                        break;
                }
                break;
            case 1: //hi
                NSLog(@"we are at hi point of view in the firstPlayerViewController");
                switch (firstStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        NSLog(@"user wants switch up");
                        okToSwitch = YES; newPointOfView = 2; break;
                    case UserWantsCameraSwitchRight:
                        NSLog(@"user wants switch right");
                        okToSwitch = YES; newPointOfView = 0; break;
                    case UserWantsCameraSwitchDown:
                        NSLog(@"user wants switch down");
                        okToSwitch = YES; newPointOfView = 3; break;
                    case UserWantsCameraSwitchLeft:
                        NSLog(@"user wants switch left");
                        okToSwitch = YES; newPointOfView = 4; break;
                    default:
                        break;
                }
                break;
            case 2: //my
                NSLog(@"we are at my point of view in the firstPlayerViewController");
                switch (firstStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        NSLog(@"user wants switch up");
                        okToSwitch = YES; newPointOfView = 3; break;
                    case UserWantsCameraSwitchRight:
                        NSLog(@"user wants switch right");
                        okToSwitch = YES; newPointOfView = 0; break;
                    case UserWantsCameraSwitchDown:
                        NSLog(@"user wants switch down");
                        okToSwitch = YES; newPointOfView = 1; break;
                    case UserWantsCameraSwitchLeft:
                        NSLog(@"user wants switch left");
                        okToSwitch = YES; newPointOfView = 4; break;
                    default:
                        break;
                }
                break;
            case 3: //ph
                NSLog(@"we are at ph point of view in the firstPlayerViewController");
                switch (firstStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        okToSwitch = YES; newPointOfView = 1; break;
                    case UserWantsCameraSwitchRight:
                        okToSwitch = YES; newPointOfView = 0; break;
                    case UserWantsCameraSwitchDown:
                        okToSwitch = YES; newPointOfView = 2; break;
                    case UserWantsCameraSwitchLeft:
                        okToSwitch = YES; newPointOfView = 4; break;
                    default:
                        break;
                }
                break;
            case 4: //sx
                NSLog(@"we are at sx point of view in the firstPlayerViewController");
                switch (firstStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        okToSwitch = YES; newPointOfView = 3; break;
                    case UserWantsCameraSwitchRight:
                        okToSwitch = YES; newPointOfView = 2; break;
                    case UserWantsCameraSwitchDown:
                        NSLog(@"user wants switch down");
                        okToSwitch = YES; newPointOfView = 1; break;
                    case UserWantsCameraSwitchLeft:
                        okToSwitch = YES; newPointOfView = 0; break;
                    default:
                        break;
                }
                break;
            default:
                break;
        }
        if (okToSwitch == YES) {
            NSLog(@"ok to switch to the second player, hiding controls");
            hidePlayerControlsWithAnimation = NO;
            [self hidePlayerControls:nil];
            NSLog(@"controls have been hidden");
        }
    }
    else {
        newPointOfView = secondStreamProducer.currentVideoFeed;
        NSLog(@"evaluating secondPlayerViewController camera to assign a new camera to firstPlayerViewController");
        NSLog(@"user wants camera switch: %d", secondStreamProducer.userWantsCameraSwitch);
        switch (secondStreamProducer.currentVideoFeed) {
            case 0: //dx
                NSLog(@"we are at dx point of view in the secondPlayerViewController");
                switch (secondStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        okToSwitch = YES; newPointOfView = 3; break;
                    case UserWantsCameraSwitchRight:
                        okToSwitch = YES; newPointOfView = 4; break;
                    case UserWantsCameraSwitchLeft:
                        okToSwitch = YES; newPointOfView = 2; break;
                    case UserWantsCameraSwitchDown:
                        NSLog(@"user wants switch down");
                        okToSwitch = YES; newPointOfView = 1; break;
                    default:
                        if (okToSwitch) {
                            NSLog(@"is ok to switch");
                        }
                        break;
                }
                break;
            case 1: //hi
                NSLog(@"we are at hi point of view in the secondPlayerViewController");
                switch (secondStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        NSLog(@"user wants switch up");
                        okToSwitch = YES; newPointOfView = 2; break;
                    case UserWantsCameraSwitchRight:
                        NSLog(@"user wants switch right");
                        okToSwitch = YES; newPointOfView = 0; break;
                    case UserWantsCameraSwitchDown:
                        NSLog(@"user wants switch down");
                        okToSwitch = YES; newPointOfView = 3; break;
                    case UserWantsCameraSwitchLeft:
                        NSLog(@"user wants switch left");
                        okToSwitch = YES; newPointOfView = 4; break;
                    default:
                        break;
                }
                break;
            case 2: //my
                NSLog(@"we are at my point of view in the secondPlayerViewController");
                switch (secondStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        NSLog(@"user wants switch up");
                        okToSwitch = YES; newPointOfView = 3; break;
                    case UserWantsCameraSwitchRight:
                        NSLog(@"user wants switch right");
                        okToSwitch = YES; newPointOfView = 0; break;
                    case UserWantsCameraSwitchDown:
                        NSLog(@"user wants switch down");
                        okToSwitch = YES; newPointOfView = 1; break;
                    case UserWantsCameraSwitchLeft:
                        NSLog(@"user wants switch left");
                        okToSwitch = YES; newPointOfView = 4; break;
                    default:
                        break;
                }
                break;
            case 3: //ph
                NSLog(@"we are at ph point of view in the secondPlayerViewController");
                switch (secondStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        okToSwitch = YES; newPointOfView = 1; break;
                    case UserWantsCameraSwitchRight:
                        okToSwitch = YES; newPointOfView = 0; break;
                    case UserWantsCameraSwitchDown:
                        okToSwitch = YES; newPointOfView = 2; break;
                    case UserWantsCameraSwitchLeft:
                        okToSwitch = YES; newPointOfView = 4; break;
                    default:
                        break;
                }
                break;
            case 4: //sx
                NSLog(@"we are at sx point of view in the secondPlayerViewController");
                switch (secondStreamProducer.userWantsCameraSwitch) {
                    case UserWantsCameraSwitchUp:
                        okToSwitch = YES; newPointOfView = 3; break;
                    case UserWantsCameraSwitchRight:
                        okToSwitch = YES; newPointOfView = 2; break;
                    case UserWantsCameraSwitchDown:
                        NSLog(@"user wants switch down");
                        okToSwitch = YES; newPointOfView = 1; break;
                    case UserWantsCameraSwitchLeft:
                        okToSwitch = YES; newPointOfView = 0; break;
                    default:
                        break;
                }
                break;
            default:
                if (okToSwitch) {
                    NSLog(@"is ok to switch");
                }
                break;
        }
        NSLog(@"user interaction evaluated in secondPlayerViewController");
        if (okToSwitch == YES) {
            hidePlayerControlsWithAnimation = NO;
            [self hidePlayerControls:nil];
            NSLog(@"switch of camera possible: changing to point of view %d in the firstPlayerViewController", newPointOfView);
        }
    }
}
-(void)switchFeedFromThumbnail {
    if (isOtherPlayer == NO) {
        NSLog(@"first player on stage, calculating how to fade to second player");
        // check if second player has the same destination feed
        if (!(secondStreamProducer.currentVideoFeed == newPointOfView)) {
            NSLog(@"we need a switch in the second player");
            // user wants a different point of view in the second player
            secondStreamProducer.currentVideoFeed = newPointOfView;
            [[secondStreamProducer streamProducer] replaceCurrentItemWithPlayerItem:[secondStreamProducer getFeed:newPointOfView]];
            [[secondStreamProducer streamProducer] seekToTime:[sharedSingleAudio currentTime]];
            CATransition *animation;
            animation = [CATransition animation];
            [animation setDuration:0.5];
            [animation setType:kCATransitionFade];
            [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
            [animation setDelegate:self];
            [self.view exchangeSubviewAtIndex:1 withSubviewAtIndex:0];
            [[self.view layer] addAnimation:animation forKey:@"userTouchedThumbnails"];
            [playerControls showChoosenPointOfView:newPointOfView];
            switchHasBeenReset = NO;
        }
        else {
            // we don't need to switch feed
            switchHasBeenReset = YES;
        }
    }
    else if (isOtherPlayer == YES) {
        NSLog(@"second player on stage, calculating how to fade to first player");
        // check if first player has the same destination feed
        if (!(firstStreamProducer.currentVideoFeed == newPointOfView)) {
            // user wants a different point of view in the first player
            firstStreamProducer.currentVideoFeed = newPointOfView;
            [[firstStreamProducer streamProducer] replaceCurrentItemWithPlayerItem:[firstStreamProducer getFeed:newPointOfView]];
            [[firstStreamProducer streamProducer] seekToTime:[sharedSingleAudio currentTime]];
            CATransition *animation;
            animation = [CATransition animation];
            [animation setDuration:0.5];
            [animation setType:kCATransitionFade];
            [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
            [animation setDelegate:self];
            [self.view exchangeSubviewAtIndex:0 withSubviewAtIndex:1];
            [[self.view layer] addAnimation:animation forKey:@"userTouchedThumbnails"];
            switchHasBeenReset = NO;
            [playerControls showChoosenPointOfView:newPointOfView];
        }
        else {
            // we don't need to switch feed
            switchHasBeenReset = YES;
        }
    }
    if (!switchHasBeenReset) {
        isOtherPlayer = !isOtherPlayer;
    }
    if ((lastOrientation == UIDeviceOrientationLandscapeLeft || lastOrientation == UIDeviceOrientationLandscapeRight) && playerControls.shownOnScreen == YES) {
        NSLog(@"we received a touch on a thumbnail while orientation is landscape");
        [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(hidePlayerControls:) object:nil];
        [self performSelector:@selector(hidePlayerControls:) withObject:nil afterDelay:5];
    }
}
#pragma mark -
#pragma mark timed methods
-(void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag {
    if (isOtherPlayer == NO) {
        [self invalidatePlayer:secondStreamProducer];
    } else {
        [self invalidatePlayer:firstStreamProducer];
    }
    [self swipeCanBeCanceled];
}
-(void)swipeCanBeCanceled {
    firstStreamProducer.gotSwipe = NO;
    secondStreamProducer.gotSwipe = NO;
}
-(void)invalidatePlayer:(StreamProducer *)aPointOfViewController {
    [[aPointOfViewController streamProducer] replaceCurrentItemWithPlayerItem:[aPointOfViewController getFeed:newPointOfView]];
    aPointOfViewController.currentVideoFeed = newPointOfView;
    NSLog(@"previous camera has been invalidated");
}
#pragma mark -
#pragma mark touch events
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"user sent touch in iPovViewController");
    [self switchFeed];
}
-(void)thumbZero {
    newPointOfView = 0;
    [self switchFeedFromThumbnail];
}
-(void)thumbOne {
    newPointOfView = 1;
    [self switchFeedFromThumbnail];
}
-(void)thumbTwo {
    newPointOfView = 2;
    [self switchFeedFromThumbnail];
}
-(void)thumbThree {
    newPointOfView = 3;
    [self switchFeedFromThumbnail];
}
-(void)thumbFour {
    newPointOfView = 4;
    [self switchFeedFromThumbnail];
}
#pragma mark -
#pragma mark ADBannerViewDelegate methods
-(void)bannerViewDidLoadAd:(ADBannerView *)banner {
    [bannerViewController layoutForCurrentOrientation:YES];
}
-(void)bannerView:(ADBannerView *)banner didFailToReceiveAdWithError:(NSError *)error {
    [bannerViewController layoutForCurrentOrientation:YES];
    [[bannerViewController view] setHidden:YES];
}
-(BOOL)bannerViewActionShouldBegin:(ADBannerView *)banner willLeaveApplication:(BOOL)willLeave {
    [bannerViewController layoutForCurrentOrientation:YES];
    [self pauseShow];
    return YES;
}
-(void)bannerViewActionDidFinish:(ADBannerView *)banner {
    [self resumeShow];
}
#pragma mark -
#pragma mark UIWebViewDelegate methods
-(void)webViewDidFinishLoad:(UIWebView *)webView {
    [self pauseShow];
    [[bannerViewController banner] setHidden:YES];
    [[infoViewController view] setHidden:NO];
}
-(void)webView:(UIWebView *)webView didFailLoadWithError:(NSError *)error {
    [[bannerViewController banner] setHidden:NO];
    [[infoViewController view] setHidden:YES];
    [playerControls infoDismissed];
    [self resumeShow];
}
#pragma mark -
#pragma mark application lifecycle
// check the orientation
-(void)orientationChanged:(NSNotification *)notification {
    UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
    if (orientation == UIDeviceOrientationPortrait ||
        orientation == UIDeviceOrientationPortraitUpsideDown ||
        orientation == UIDeviceOrientationLandscapeLeft ||
        orientation == UIDeviceOrientationLandscapeRight) {
        lastOrientation = orientation;
    }
    if ((lastOrientation == UIDeviceOrientationPortrait || lastOrientation == UIDeviceOrientationPortraitUpsideDown) && playerControls.shownOnScreen == NO) {
        [self showPlayerControls];
    }
}
// Override to allow orientations other than the default portrait orientation.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
    return YES;
}
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration {
    if (toInterfaceOrientation == UIInterfaceOrientationPortrait ||
        toInterfaceOrientation == UIInterfaceOrientationPortraitUpsideDown) {
        [playerControls setControlsOnScreenPortrait];
        [bannerViewController setControlsOnScreenPortrait];
    }
    else {
        [playerControls setControlsOnScreenLandscape];
        [bannerViewController setControlsOnScreenLandscape];
    }
}
#pragma mark -
#pragma mark memory management
- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc. that aren't in use.
}
- (void)viewDidUnload {
    [super viewDidUnload];
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
}
- (void)dealloc {
    [super dealloc];
}
@end
"UserSessionManager" implementation code finished.
[68] "StreamProducer" class is invoked by "UserSessionManager" to produce the two main alternating objects for presenting video to the user. It contains the "Gesture Mapper" implementation that in the preferred embodiment is responsible for mapping the appropriate animations generated by the "Animation Engine" to user gesture actions. Methods named "- (void)animateContent(FromLeft/FromRight/FronTop/FromBottom)" to a transition happening from a first player to a second player.
[69] "StreamProducer" header file code is as follows.
[70] "StreamProducer" header code starts below:
//
//  StreamProducer.h
//  iPov3
//
//  Created by Antonio Rossi on 01/01/11.
//  Copyright 2011 Yoctle Limited. All rights reserved.
//
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
typedef enum {
    UserWantsCameraSwitchUp = 0,
    UserWantsCameraSwitchRight,
    UserWantsCameraSwitchDown,
    UserWantsCameraSwitchLeft,
    UserWantsCameraSwitchMAX,
} UserWantsCameraSwitch;
@class SceneDescriptor;
@class StreamConsumer;
/*
 This class manages a UIView having a StreamConsumer object that loads content from different sources loaded by the SceneDescriptor class
 */
@interface StreamProducer : UIViewController <UIGestureRecognizerDelegate> {
    NSMutableArray *FeedDistributor;
    int numberOfVideoFeeds;
    int currentVideoFeed;
    int newVideoFeed;
    StreamConsumer *videoFeed;
    AVPlayer *streamProducer;
    AVPlayerItem *streamReader;
    UserWantsCameraSwitch userWantsCameraSwitch;
    CATransition *animation;
    UIDeviceOrientation lastOrientation;
    BOOL gotSwipe;
}
@property (nonatomic, retain) AVPlayer *streamProducer;
@property (nonatomic, retain) AVPlayerItem *streamReader;
@property (nonatomic, retain) IBOutlet UIButton *playButton;
@property int numberOfVideoFeeds;
@property int currentVideoFeed;
@property int newVideoFeed;
@property UserWantsCameraSwitch userWantsCameraSwitch;
@property (nonatomic, retain) CATransition *animation;
@property (nonatomic) BOOL gotSwipe;
-(void)animateContent;
-(void)animateContentFromLeft;
-(void)animateContentFromRight;
-(void)animateContentFromTop;
-(void)animateContentFromBottom;
-(void)syncUI;
-(void)loadStage;
-(void)fireTouch;
-(AVPlayerItem *)getFeed:(int)aFeed;
@end
"StreamProducer" header code finished. [71] "StreamProducer" is a derived class of UlViewController; is function is such that when one of the objects is presented on screen it is designated as the first responder to user driven events, as a consequence it manages the user interaction in the given coordinates system (method "-(void)gestureMappen(UISwipeGestureRecognizer *)recognizer".
[72] Furthermore, utilizing the information relative to the device orientation, "StreamProducer" also manages the animations required when the user is moving (choosing a different vantage point of view) to another feed, the logic of which is defined in the "-(void)animateContent(FromLeft/FromRight/FromTop/FromBottom)" methods, summarized in the sketch below.
[73] "StreamProducer" implementation code is as follows:
[74] "StreamProducer" implementation code starts below:
//
//  StreamProducer.m
//  iPov3
//
//  Created by Antonio Rossi on 01/01/11.
//  Copyright 2011 Yoctle Limited. All rights reserved.
//
#import "StreamProducer.h"
#import "StreamConsumer.h"
#import "SceneDescriptor.h"
@implementation StreamProducer
@synthesize streamProducer;
@synthesize streamReader;
@synthesize playButton;
@synthesize numberOfVideoFeeds;
@synthesize currentVideoFeed;
@synthesize newVideoFeed;
@synthesize userWantsCameraSwitch;
@synthesize animation;
@synthesize gotSwipe;
// Define this constant for the key-value observation context.
static const NSString *ItemStatusContext;
#pragma mark -
#pragma mark initialization
// Implement loadView to create a view hierarchy programmatically, without using a nib.
- (void)loadView {
    [super loadView];
    [self loadStage];
    // use the StreamConsumer object created in loadStage as this controller's view
    self.view = videoFeed;
    gotSwipe = NO;
    NSLog(@"This player has point of view: %d", currentVideoFeed);
}
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad {
    [super viewDidLoad];
    UISwipeGestureRecognizer *gestureRecognizer;
    gestureRecognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(gestureMapper:)];
    [gestureRecognizer setDirection:(UISwipeGestureRecognizerDirectionRight)];
    [[self view] addGestureRecognizer:gestureRecognizer];
    [gestureRecognizer release];
    gestureRecognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(gestureMapper:)];
    [gestureRecognizer setDirection:(UISwipeGestureRecognizerDirectionUp)];
    [[self view] addGestureRecognizer:gestureRecognizer];
    [gestureRecognizer release];
    gestureRecognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(gestureMapper:)];
    [gestureRecognizer setDirection:(UISwipeGestureRecognizerDirectionDown)];
    [[self view] addGestureRecognizer:gestureRecognizer];
    [gestureRecognizer release];
    gestureRecognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(gestureMapper:)];
    [gestureRecognizer setDirection:(UISwipeGestureRecognizerDirectionLeft)];
    [[self view] addGestureRecognizer:gestureRecognizer];
    [gestureRecognizer release];
    // register for orientation notification and set the initial orientation
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(orientationChanged:) name:@"UIDeviceOrientationDidChangeNotification" object:nil];
    lastOrientation = UIDeviceOrientationPortrait;
    self.wantsFullScreenLayout = YES;
}
-(void)loadStage {
    if (self.numberOfVideoFeeds < 1) {
        NSLog(@"creating a PointOfViewController");
        // initialize the properties
        SceneDescriptor *stage = [[SceneDescriptor alloc] initWithStageFiles];
        FeedDistributor = [[NSMutableArray arrayWithArray:[stage stageFeedDistributor]] retain];
        numberOfVideoFeeds = stage.numberOfVideoFeeds;
        currentVideoFeed = stage.initialVideoFeed;
        self.streamReader = [FeedDistributor objectAtIndex:currentVideoFeed];
        NSAssert([streamReader isKindOfClass:[AVPlayerItem class]], @"not a player item");
        [streamReader addObserver:self forKeyPath:@"status" options:0 context:&ItemStatusContext];
        self.streamProducer = [AVPlayer playerWithPlayerItem:streamReader];
        videoFeed = [[StreamConsumer alloc] initWithFrame:[UIScreen mainScreen].applicationFrame];
        [videoFeed setStreamRenderer:streamProducer];
        NSLog(@"PointOfViewController:viewDidLoad executed [player play]");
        [stage autorelease];
    }
}
- (AVPlayerItem *)getFeed:(int)aFeed {
    AVPlayerItem *newPointOfView = [FeedDistributor objectAtIndex:aFeed];
    return newPointOfView;
}
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if (context == &ItemStatusContext) {
        [self syncUI];
        return;
    }
    [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    return;
}
- (void)syncUI {
    if ((streamProducer.currentItem != nil) &&
        ([streamProducer.currentItem status] == AVPlayerItemStatusReadyToPlay)) {
        playButton.enabled = YES;
    }
    else {
        playButton.enabled = NO;
    }
}
#pragma mark -
#pragma mark user interaction
- (void)gestureMapper:(UISwipeGestureRecognizer *)recognizer {
    // if (!gotSwipe) {
    NSLog(@"got a swipe");
    gotSwipe = YES;
    switch (lastOrientation) {
        case UIDeviceOrientationPortrait:
            switch (recognizer.direction) {
                case UISwipeGestureRecognizerDirectionUp:
                    userWantsCameraSwitch = UserWantsCameraSwitchUp;
                    [self animateContentFromTop];
                    break;
                case UISwipeGestureRecognizerDirectionRight:
                    userWantsCameraSwitch = UserWantsCameraSwitchRight;
                    [self animateContentFromLeft];
                    break;
                case UISwipeGestureRecognizerDirectionDown:
                    userWantsCameraSwitch = UserWantsCameraSwitchDown;
                    [self animateContentFromBottom];
                    break;
                case UISwipeGestureRecognizerDirectionLeft:
                    userWantsCameraSwitch = UserWantsCameraSwitchLeft;
                    [self animateContentFromRight];
                    break;
                default:
                    break;
            }
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            switch (recognizer.direction) {
                case UISwipeGestureRecognizerDirectionUp:
                    userWantsCameraSwitch = UserWantsCameraSwitchUp;
                    [self animateContentFromBottom];
                    break;
                case UISwipeGestureRecognizerDirectionRight:
                    userWantsCameraSwitch = UserWantsCameraSwitchRight;
                    [self animateContentFromRight];
                    break;
                case UISwipeGestureRecognizerDirectionDown:
                    userWantsCameraSwitch = UserWantsCameraSwitchDown;
                    [self animateContentFromTop];
                    break;
                case UISwipeGestureRecognizerDirectionLeft:
                    userWantsCameraSwitch = UserWantsCameraSwitchLeft;
                    [self animateContentFromLeft];
                    break;
                default:
                    break;
            }
            break;
        case UIDeviceOrientationLandscapeRight:
            switch (recognizer.direction) {
                case UISwipeGestureRecognizerDirectionUp:
                    userWantsCameraSwitch = UserWantsCameraSwitchUp;
                    [self animateContentFromRight];
                    break;
                case UISwipeGestureRecognizerDirectionRight:
                    userWantsCameraSwitch = UserWantsCameraSwitchRight;
                    [self animateContentFromTop];
                    break;
                case UISwipeGestureRecognizerDirectionDown:
                    userWantsCameraSwitch = UserWantsCameraSwitchDown;
                    [self animateContentFromLeft];
                    break;
                case UISwipeGestureRecognizerDirectionLeft:
                    userWantsCameraSwitch = UserWantsCameraSwitchLeft;
                    [self animateContentFromBottom];
                    break;
                default:
                    break;
            }
            break;
        case UIDeviceOrientationLandscapeLeft:
            switch (recognizer.direction) {
                case UISwipeGestureRecognizerDirectionUp:
                    userWantsCameraSwitch = UserWantsCameraSwitchUp;
                    [self animateContentFromLeft];
                    break;
                case UISwipeGestureRecognizerDirectionRight:
                    userWantsCameraSwitch = UserWantsCameraSwitchRight;
                    [self animateContentFromBottom];
                    break;
                case UISwipeGestureRecognizerDirectionDown:
                    userWantsCameraSwitch = UserWantsCameraSwitchDown;
                    [self animateContentFromRight];
                    break;
                case UISwipeGestureRecognizerDirectionLeft:
                    userWantsCameraSwitch = UserWantsCameraSwitchLeft;
                    [self animateContentFromTop];
                    break;
                default:
                    break;
            }
            break;
        default:
            userWantsCameraSwitch = UserWantsCameraSwitchMAX;
            break;
    }
    // }
}
// check the orientation
-(void)orientationChanged:(NSNotification *)notification {
    UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
    if (orientation == UIDeviceOrientationPortrait ||
        orientation == UIDeviceOrientationPortraitUpsideDown ||
        orientation == UIDeviceOrientationLandscapeLeft ||
        orientation == UIDeviceOrientationLandscapeRight) {
        lastOrientation = orientation;
        NSLog(@"point of view orientation has changed, now we are in %d", lastOrientation);
    }
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if (!gotSwipe) {
        NSUInteger numTaps = [[touches anyObject] tapCount];
        switch (numTaps) {
            case 1:
                NSLog(@"user touched with 1 touch");
                [self performSelector:@selector(fireTouch) withObject:nil afterDelay:0.2];
                break;
            case 2:
                NSLog(@"user touched with 2 touches");
                break;
            case 3:
                NSLog(@"user touched with 3 touches");
                break;
            case 4:
                NSLog(@"user touched with 4 touches");
                break;
            default:
                NSLog(@"user touched with more than 4 touches");
                break;
        }
    }
}
-(void)fireTouch {
    // if the user gave a swipe or we are in portrait, cancel
    if (!gotSwipe && (lastOrientation == UIDeviceOrientationLandscapeLeft || lastOrientation == UIDeviceOrientationLandscapeRight)) {
        NSLog(@"point of view orientation is %d, we can post userWantsPlayerControls", lastOrientation);
        [[NSNotificationCenter defaultCenter] postNotificationName:@"userWantsPlayerControls" object:self];
    }
}
#pragma mark -
#pragma mark transition animation
-(void)animateContent {
    animation = [CATransition animation];
    [animation setDuration:0.5];
    [animation setType:kCATransitionPush];
    [animation setSubtype:kCATransitionFromBottom];
    [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
    //[[self.view.superview layer] addAnimation:animation forKey:@"SwitchToView1"];
    [[NSNotificationCenter defaultCenter] postNotificationName:@"switchCamera" object:self];
    NSLog(@"notification sent from PointOfViewController:switchCamera");
}
-(void)animateContentFromLeft {
    animation = [CATransition animation];
    [animation setDuration:0.5];
    [animation setType:kCATransitionPush];
    [animation setSubtype:kCATransitionFromLeft];
    [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
    animation.removedOnCompletion = YES;
    [[NSNotificationCenter defaultCenter] postNotificationName:@"switchCamera" object:self];
    NSLog(@"notification sent from PointOfViewController:switchCamera");
}
-(void)animateContentFromRight {
    animation = [CATransition animation];
    [animation setDuration:0.5];
    [animation setType:kCATransitionPush];
    [animation setSubtype:kCATransitionFromRight];
    [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
    animation.removedOnCompletion = YES;
    [[NSNotificationCenter defaultCenter] postNotificationName:@"switchCamera" object:self];
    NSLog(@"notification sent from PointOfViewController:switchCamera");
}
-(void)animateContentFromTop {
    animation = [CATransition animation];
    [animation setDuration:0.5];
    [animation setType:kCATransitionPush];
    [animation setSubtype:kCATransitionFromTop];
    [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
    animation.removedOnCompletion = YES;
    [[NSNotificationCenter defaultCenter] postNotificationName:@"switchCamera" object:self];
    NSLog(@"notification sent from PointOfViewController:switchCamera");
}
-(void)animateContentFromBottom {
    animation = [CATransition animation];
    [animation setDuration:0.5];
    [animation setType:kCATransitionPush];
    [animation setSubtype:kCATransitionFromBottom];
    [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
    animation.removedOnCompletion = YES;
    [[NSNotificationCenter defaultCenter] postNotificationName:@"switchCamera" object:self];
    NSLog(@"notification sent from PointOfViewController:switchCamera");
}
#pragma mark -
#pragma mark application life cycle
- (void)playerItemDidReachEnd:(NSNotification *)notification {
    //[player seekToTime:kCMTimeZero];
}
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
    // Overridden to allow any orientation.
    return YES;
}
- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc. that aren't in use.
}
- (void)viewDidUnload {
    [super viewDidUnload];
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
}
- (void)dealloc {
    [super dealloc];
}
@end
"StreamProducer" header code and implementation code finished.
[75] A further element of the GUI gives additional choices to the user through a panel of player controls, which manages the on-screen presentation of thumbnails related to the available video feeds; this element is managed by a class named "PlayerControlsViewController".
[76] "PlayerControlsViewControUer" header file is as follows.
[77] "PlayerControlsViewControUer" header file code begins below.
//
//  PlayerControlsViewController.h
//  iPov3
//
//  Created by Antonio Rossi on 07/01/11.
//  Copyright 2011 Yoctle Limited. All rights reserved.
//
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import "Global.h"
@class SceneDescriptor;
@class SingleAudio;
@interface PlayerControlsViewController : UIViewController {
    CGFloat playerControlsHeight;
    int pointsOfView;
    NSTimer *animationUpdate;
    SingleAudio *sharedSingleAudio;
    // array container for player items, using the thumbnails property of class Stage in this class
    NSMutableArray *thumbnailsAssets;
    BOOL shownOnScreen;
    BOOL thumbnailsAreClean;
    BOOL demoInfoRequested;
    int thumbNailsUpdateFrequency;
    int generateThumbnailAtIndex;
    NSArray *imageGenerators;
    NSMutableArray *splashThumbnailsImagesArray;
    NSArray *visibleThumbnailsArray;
    AVAssetImageGenerator *thumbZeroGenerator;
    AVAssetImageGenerator *thumbOneGenerator;
    AVAssetImageGenerator *thumbTwoGenerator;
    AVAssetImageGenerator *thumbThreeGenerator;
    AVAssetImageGenerator *thumbFourGenerator;
    IBOutlet UIImageView *thumbZero;
    IBOutlet UIImageView *thumbOne;
    IBOutlet UIImageView *thumbTwo;
    IBOutlet UIImageView *thumbThree;
    IBOutlet UIImageView *thumbFour;
    IBOutlet UIButton *demoInfoButton;
    UIImage *iPovLogo;
    SceneDescriptor *stage;
}
@property (nonatomic) BOOL shownOnScreen;
@property (nonatomic, retain) NSArray *imageGenerators;
@property (nonatomic, retain) NSMutableArray *thumbnailsAssets;
@property (nonatomic, retain) NSArray *visibleThumbnailsArray;
@property (nonatomic) BOOL thumbnailsAreClean;
@property (nonatomic, retain) SceneDescriptor *stage;
@property (nonatomic, retain) IBOutlet UIButton *demoInfoButton;
-(IBAction)pauseButton:(id)sender;
-(IBAction)playButton:(id)sender;
-(IBAction)demoInfoButton:(id)sender;
-(IBAction)rewindButton:(id)sender;
-(IBAction)contentInfoButton:(id)sender;
-(void)loadStage;
-(void)setIPOV;
-(void)showChoosenPointOfView:(int)pointOfView;
-(void)setControlsOnScreenPortrait;
-(void)setControlsOnScreenLandscape;
-(void)getPositionOnScreen;
-(void)updateThumbnailsStartingFromIndex:(int)index;
-(void)infoDismissed;
@end
"PlayerControlsViewController" header file code finished.
[78] A couple of methods of "PlayerControlsViewController" ("-(void)setControlsOnScreenPortrait" / "-(void)setControlsOnScreenLandscape") are responsible for animating the view and managing the thumbnails along with the other buttons available to user interaction, such as "play" and "pause". In the current implementation this view represents only a portion of the visible area which, in some circumstances, overlaps the user-selected video feed; particular care is then taken to constantly dock it in a position that poses minimum interference with the principal view (the selected feed served to the user).
[79] The iPad limits the number of views playing video that can be presented on the screen at the same time, so to manage an unlimited number of thumbnails related to the available feeds from which the user may choose, "PlayerControlsViewController" has a method named "-(void)updateThumbnailsStartingFromIndex:(int)index" which generates an image for a given thumbnail at a given show time using an asynchronously recursive algorithm and assigns it to the UIImageView visible area for that thumbnail. The algorithm then recursively proceeds with the next required thumbnail at the following index. This procedure provides the capability of generating a nearly unlimited number of thumbnails for the available feeds without incurring the inherent limits of the video player on what can be shown to the user at a given time. A minimal sketch of the recursive walk follows.
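The following sketch is not part of the original listing; the method name generateThumbnailAtIndex: is hypothetical, and the sketch assumes the imageGenerators and visibleThumbnailsArray properties declared in the header above. Dispatching the UIImageView update to the main queue is a precaution, since the AVAssetImageGenerator completion handler may run on a background thread.
// Illustrative sketch (hypothetical method name): generate one thumbnail at
// the current show time, then recurse on the next index from the handler.
- (void)generateThumbnailAtIndex:(int)index {
    if (index >= (int)[self.imageGenerators count]) return; // all feeds processed
    CMTime showTime = [[SingleAudio sharedSingleAudio] currentTime];
    NSArray *times = [NSArray arrayWithObject:[NSValue valueWithCMTime:showTime]];
    [[self.imageGenerators objectAtIndex:index]
        generateCGImagesAsynchronouslyForTimes:times
        completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                            AVAssetImageGeneratorResult result, NSError *error) {
            if (result == AVAssetImageGeneratorSucceeded) {
                UIImage *thumb = [UIImage imageWithCGImage:image];
                dispatch_async(dispatch_get_main_queue(), ^{
                    [[self.visibleThumbnailsArray objectAtIndex:index] setImage:thumb];
                });
            }
            // Succeeded, failed or cancelled: move on to the next feed either way.
            [self generateThumbnailAtIndex:index + 1];
        }];
}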
[80] "PlayerControlsViewController" class implementation code is as follows.
[81] "PlayerControlsViewController" implementation code starts below. //
// PlayerControlsViewController.nl
// C-iPov
//
// Created by Antonio Rossi on 07/01/11.
// Copyright 2011 Yoctle Limited Limited. All rights reserved.
//
#import "PlayerControlsViewController.h"
#import "Global.h"
#import "PlayerControlsView.h"
#import "SceneDescriptor.h"
#import "SingleAudio.h"
@ implementation PlayerControls ViewControUer
@synthesize shownOnScreen;
@synthesize imageGenerators;
@synthesize stage;
@synthesize thumbnailsAssets;
@synthesize thumbnailsAreClean;
@synthesize visibleThumbnailsArray;
@synthesize demoInfoButton;
#pragma mark -
#pragma mark initialization
-(void)loadStage {
    // initialize the properties
    stage = [[SceneDescriptor alloc] initWithStageFiles];
    thumbnailsAssets = [[stage stageThumbnailsStreamsReaders] copy];
    [stage release];
    stage = nil;
    splashThumbnailsImagesArray = [NSMutableArray arrayWithCapacity:CAMERAS_ON_STAGE];
    [splashThumbnailsImagesArray retain];
    NSMutableArray *temporaryImageGenerators = [NSMutableArray arrayWithCapacity:CAMERAS_ON_STAGE];
    for (int i = 0; i < CAMERAS_ON_STAGE; i++) {
        NSLog(@"starting generating content for thumbnails");
        [splashThumbnailsImagesArray addObject:[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"iPOV-43" ofType:@"png"]]];
        [temporaryImageGenerators addObject:[AVAssetImageGenerator assetImageGeneratorWithAsset:[thumbnailsAssets objectAtIndex:i]]];
    }
    imageGenerators = [[[NSArray alloc] initWithArray:temporaryImageGenerators] copy];
    NSLog(@"thumbnails arrays have been allocated");
    thumbNailsUpdateFrequency = 1;
    iPovLogo = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"iPOV-43" ofType:@"png"]];
    [iPovLogo retain];
    [self setIPOV];
}
-(void)setIPOV {
    NSLog(@"now thumbnails are the iPov logo image");
    thumbnailsAreClean = YES;
    if (visibleThumbnailsArray == nil) {
        [thumbZero setImage:iPovLogo];
        [thumbOne setImage:iPovLogo];
        [thumbTwo setImage:iPovLogo];
        [thumbThree setImage:iPovLogo];
        [thumbFour setImage:iPovLogo];
        visibleThumbnailsArray = [[NSArray alloc] initWithObjects:thumbZero, thumbOne, thumbTwo, thumbThree, thumbFour, nil];
    }
    //visibleThumbnailsArray = [[NSArray alloc] initWithObjects:thumbZero, thumbOne, thumbTwo, thumbThree, thumbFour, nil];
}
-(void)viewDidLoad {
    [super viewDidLoad];
    playerControlsHeight = PLAYER_CONTROLS_HEIGHT;
    sharedSingleAudio = [SingleAudio sharedSingleAudio];
    animationUpdate = [NSTimer scheduledTimerWithTimeInterval:0.5 target:self selector:@selector(updateAnimation:) userInfo:nil repeats:YES];
    thumbnailsAreClean = YES;
    demoInfoRequested = NO;
}
#pragma mark -
#pragma mark update animations
- (void)updateAnimation:(NSTimer *)theTimer {
    if (shownOnScreen) {
        //NSLog(@"entering in thumbnails generation algorithm");
        generateThumbnailAtIndex = 0;
        [self updateThumbnailsStartingFromIndex:0];
    }
    if (!shownOnScreen && !thumbnailsAreClean) {
        NSLog(@"IPOV IPOV IPOV IPOV IPOV IPOV IPOV IPOV ");
        [self setIPOV];
    }
    if (demoInfoRequested) {
        if (demoInfoButton.state == UIControlStateNormal) {
            demoInfoButton.highlighted = YES;
        } else if (demoInfoButton.state == UIControlStateHighlighted) {
            demoInfoButton.highlighted = NO;
        }
    }
}
#pragma mark -
#pragma mark thumbnails view management
-(void)showChoosenPointOfView:(int)pointOfView {
    for (int i = 0; i < [visibleThumbnailsArray count]; i++) {
        [[visibleThumbnailsArray objectAtIndex:i] setHighlighted:NO];
    }
    [[visibleThumbnailsArray objectAtIndex:pointOfView] setHighlightedImage:[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"iPOV-43" ofType:@"png"]]];
    [[visibleThumbnailsArray objectAtIndex:pointOfView] setHighlighted:YES];
}
-(void)updateThumbnailsStartingFromIndex:(int)index {
    if (generateThumbnailAtIndex < CAMERAS_ON_STAGE) {
        CMTime showTime = [[SingleAudio sharedSingleAudio] currentTime];
        NSArray *frameAtShowTime = [NSArray arrayWithObjects:[NSValue valueWithCMTime:showTime], nil];
        //NSLog(@"recursive algorithm for thumbnails generation working on thumbnail number: %d", index);
        [[imageGenerators objectAtIndex:index]
            generateCGImagesAsynchronouslyForTimes:frameAtShowTime
            completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
                //NSLog(@"evaluating thumbnails images");
                if (result == AVAssetImageGeneratorSucceeded) {
                    //NSLog(@"could generate an image for thumbnail at index: %d", index);
                    [splashThumbnailsImagesArray replaceObjectAtIndex:index withObject:[UIImage imageWithCGImage:image]];
                    [[visibleThumbnailsArray objectAtIndex:index] setImage:[splashThumbnailsImagesArray objectAtIndex:index]];
                    generateThumbnailAtIndex++;
                    if (generateThumbnailAtIndex <= CAMERAS_ON_STAGE) {
                        [self updateThumbnailsStartingFromIndex:generateThumbnailAtIndex];
                    }
                }
                if (result == AVAssetImageGeneratorFailed) {
                    //NSLog(@"could not generate an image for thumbnail at index: %d, error: %@", index, error);
                    generateThumbnailAtIndex++;
                    if (generateThumbnailAtIndex <= CAMERAS_ON_STAGE) {
                        [self updateThumbnailsStartingFromIndex:generateThumbnailAtIndex];
                    }
                }
                if (result == AVAssetImageGeneratorCancelled) {
                    //NSLog(@"image generator cancelled for thumbnail at index: %d, reason: %@", index, error);
                    generateThumbnailAtIndex++;
                    if (generateThumbnailAtIndex <= CAMERAS_ON_STAGE) {
                        [self updateThumbnailsStartingFromIndex:generateThumbnailAtIndex];
                    }
                }
            }];
    } else {
        //NSLog(@"placing thumbnails");
        //NSLog(@"thumbnails generated");
        thumbnailsAreClean = NO;
    }
}
#pragma mark -
#pragma mark user interface rotation management
-(void)getPositionOnScreen {
    CGPoint origin = self.view.frame.origin;
    CGSize size = self.view.frame.size;
    CGPoint center = self.view.center;
    NSLog(@"playerControlsView positioning is as follows, x: %f - y: %f - width: %f - height: %f, center.x: %f, center.y: %f", origin.x, origin.y, size.width, size.height, center.x, center.y);
}
-(void)setControlsOnScreenPortrait {
    CGPoint superViewcenter = self.view.superview.center;
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGPoint newCenter = CGPointMake(superViewcenter.x, screenRect.size.height - playerControlsHeight / 2);
    [self.view setCenter:newCenter];
    NSLog(@"playerControlsView has been set up in portrait");
}
-(void)setControlsOnScreenLandscape {
    CGPoint superViewcenter = self.view.superview.center;
    //CGRect screenRect = [[UIScreen mainScreen] bounds];
    //CGPoint newCenter = CGPointMake(superViewcenter.y, screenRect.size.width - playerControlsHeight / 2);
    CGPoint newCenter = CGPointMake(superViewcenter.y, playerControlsHeight / 2);
    [self.view setCenter:newCenter];
    NSLog(@"playerControlsView has been set up in landscape");
}
#pragma mark -
#pragma mark user interaction
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if ([touch view] == thumbZero) {
        [[NSNotificationCenter defaultCenter] postNotificationName:@"thumbZero" object:self];
    }
    else if ([touch view] == thumbOne) {
        [[NSNotificationCenter defaultCenter] postNotificationName:@"thumbOne" object:self];
    }
    else if ([touch view] == thumbTwo) {
        [[NSNotificationCenter defaultCenter] postNotificationName:@"thumbTwo" object:self];
    }
    else if ([touch view] == thumbThree) {
        [[NSNotificationCenter defaultCenter] postNotificationName:@"thumbThree" object:self];
    }
    else if ([touch view] == thumbFour) {
        [[NSNotificationCenter defaultCenter] postNotificationName:@"thumbFour" object:self];
    }
    NSLog(@"user clicked a thumbnail");
}
-(IBAction)playButton:(id)sender {
    NSLog(@"user clicked play");
    [[NSNotificationCenter defaultCenter] postNotificationName:@"playButton" object:self];
}
-(IBAction)pauseButton:(id)sender {
    NSLog(@"user clicked pause");
    [[NSNotificationCenter defaultCenter] postNotificationName:@"pauseButton" object:self];
}
-(IBAction)demoInfoButton:(id)sender {
    demoInfoRequested = YES;
    [[NSNotificationCenter defaultCenter] postNotificationName:@"demoInfoButton" object:self];
}
-(void)infoDismissed {
    demoInfoRequested = NO;
    [demoInfoButton setHighlighted:NO];
}
-(IBAction)contentInfoButton:(id)sender {
    [[NSNotificationCenter defaultCenter] postNotificationName:@"contentInfoButton" object:self];
}
-(IBAction)rewindButton:(id)sender {
    [[NSNotificationCenter defaultCenter] postNotificationName:@"rewindButton" object:self];
}
#pragma mark -
#pragma mark application lifeCycle
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
    // Overridden to allow any orientation.
    return YES;
}
- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc. that aren't in use.
}
- (void)viewDidUnload {
    [super viewDidUnload];
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
}
- (void)dealloc {
    [super dealloc];
}
@end
"PlayerControlsViewController" implementation code finished.
[82] What has been described is a new and improved system and method for a remote control for portable electronic devices that is simple to operate and operable with a single hand, overcoming the limitations and disadvantages inherent in the related art.
[83] Although the present invention has been described with a degree of particularity, it is understood that the present disclosure has been made by way of example. As various changes could be made in the above description without departing from the scope of the invention, it is intended that all matter contained in the above description or shown in the accompanying drawings shall be illustrative and not used in a limiting sense.

Claims

What is claimed is:
1. A method of manipulating an audio video visualization in a multi dimensional virtual environment implemented in a computer system having a display unit for displaying the virtual environment and a gesture driven interface, said method manipulating the visualization in response to predetermined user gestures and movements identified by the gesture driven interface, comprising the steps of:
receiving user gestural input by a capable hardware device;
a client software object capable of playing a plurality of multimedia streaming sources;
said multimedia streaming sources corresponding to digitally encoded files related to an event;
said multimedia streaming sources corresponding to different viewpoints of said event;
said viewpoints having means for a connection graph related to their positioning in space;
said software initially playing a selection of a first multimedia streaming source from said plurality of multimedia streaming sources;
said client software object having means for uninterrupted switching from said initially playing selection of a first multimedia streaming source to a new selection of a new multimedia streaming source selected from said plurality of multimedia streaming sources;
said client software object having means for receiving a switch request; said client software object having means for relating said gestural input to said switch request;
said client object having means for relating user gestures to said connection graph; and
said client object having means for visualizing transitions in space among said multimedia streaming sources, said transitions related to said connection graph so as to visualize a transition, upon receiving said gestural input for said switch request to said new selection of said new multimedia streaming source, using said connection graph and performing an uninterrupted switching of said multimedia streaming sources.
2. The method of claim 1, wherein said transitions comprise tridimensional transformations.
3. The method of claim 1, wherein said plurality of multimedia streaming sources comprise at least an audio content.
4. The method of claim 1, wherein said client software object is receiving at least two video streaming sources for animating transition.
5. The method of claim 4, wherein said user gestures comprise swipe gestures.
6. The method of claim 5, wherein a single audio performs a basis synchronization for said plurality of multimedia streaming sources.
7. The method of claim 6, wherein said transitions are animated in a planar fashion relative to said computer system device screen.
8. The method of claim 1, wherein said connection graph comprises camera position 3D coordinates.
9. The method of claim 1, wherein said plurality of multimedia streaming sources are accessed by said client software object over a network.
PCT/US2012/022088 2011-01-22 2012-01-20 Dynamic 2d and 3d gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds WO2012100202A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/981,058 US20130332829A1 (en) 2011-01-22 2012-01-20 Dynamic 2d and 3d gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds
US15/000,361 US20160239095A1 (en) 2011-01-22 2016-01-19 Dynamic 2D and 3D gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161435277P 2011-01-22 2011-01-22
US61/435,277 2011-01-22

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US13/981,058 A-371-Of-International US20130332829A1 (en) 2011-01-22 2012-01-20 Dynamic 2d and 3d gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds
US15/000,361 Continuation US20160239095A1 (en) 2011-01-22 2016-01-19 Dynamic 2D and 3D gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds

Publications (1)

Publication Number Publication Date
WO2012100202A1 true WO2012100202A1 (en) 2012-07-26

Family

ID=46516125

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/022088 WO2012100202A1 (en) 2011-01-22 2012-01-20 Dynamic 2d and 3d gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds

Country Status (2)

Country Link
US (2) US20130332829A1 (en)
WO (1) WO2012100202A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8990689B2 (en) * 2011-02-03 2015-03-24 Sony Corporation Training for substituting touch gestures for GUI or hardware keys to control audio video play
US9047005B2 (en) 2011-02-03 2015-06-02 Sony Corporation Substituting touch gestures for GUI or hardware keys to control audio video play
US20120260290A1 (en) 2011-04-07 2012-10-11 Sony Corporation User interface for audio video display device such as tv
CN109426496B (en) * 2017-08-31 2021-11-26 武汉斗鱼网络科技有限公司 Method for writing program log into file, storage medium, electronic device and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090061837A1 (en) * 2007-09-04 2009-03-05 Chaudhri Imran A Audio file interface
US20090174677A1 (en) * 2008-01-06 2009-07-09 Gehani Samir B Variable Rate Media Playback Methods for Electronic Devices with Touch Interfaces
US20090303231A1 (en) * 2008-06-09 2009-12-10 Fabrice Robinet Touch Screen Device, Method, and Graphical User Interface for Manipulating Three-Dimensional Virtual Objects
US20100271367A1 (en) * 2009-04-22 2010-10-28 Sony Computer Entertainment America Inc. Method and apparatus for combining a real world event and a computer simulation

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11666566B2 (en) 2012-09-18 2023-06-06 Auspex Pharmaceuticals, Inc. Formulations and pharmacokinetics of deuterated benzoquinoline inhibitors of vesicular monoamine transporter 2
US11357772B2 (en) 2015-03-06 2022-06-14 Auspex Pharmaceuticals, Inc. Methods for the treatment of abnormal involuntary movement disorders
US11446291B2 (en) 2015-03-06 2022-09-20 Auspex Pharmaceuticals, Inc. Methods for the treatment of abnormal involuntary movement disorders
US11564917B2 (en) 2015-03-06 2023-01-31 Auspex Pharmaceuticals, Inc. Methods for the treatment of abnormal involuntary movement disorders
US11648244B2 (en) 2015-03-06 2023-05-16 Auspex Pharmaceuticals, Inc. Methods for the treatment of abnormal involuntary movement disorders

Also Published As

Publication number Publication date
US20160239095A1 (en) 2016-08-18
US20130332829A1 (en) 2013-12-12

Similar Documents

Publication Publication Date Title
JP7189895B2 (en) Method and system for generating a mixed reality scene based on virtual and real-world objects represented from different Vantage points in different video data streams
US10016679B2 (en) Multiple frame distributed rendering of interactive content
US10127722B2 (en) Mobile capture visualization incorporating three-dimensional and two-dimensional imagery
US8024671B2 (en) Three-dimensional graphic user interface, and apparatus and method of providing the same
US20160239095A1 (en) Dynamic 2D and 3D gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds
KR102502794B1 (en) Methods and systems for customizing virtual reality data
KR101212231B1 (en) Method for displaying advanced virtual reality blended of freedom movement
KR102433857B1 (en) Device and method for creating dynamic virtual content in mixed reality
CN109189302B (en) Control method and device of AR virtual model
US8938093B2 (en) Addition of immersive interaction capabilities to otherwise unmodified 3D graphics applications
US11044398B2 (en) Panoramic light field capture, processing, and display
US11698680B2 (en) Methods and systems for decoding and rendering a haptic effect associated with a 3D environment
Rumiński et al. Creation of interactive AR content on mobile devices
KR20140145217A (en) 3d virtual modeling system using spatial information and method thereof
KR20130133319A (en) Apparatus and method for authoring graphic user interface using 3d animations
US10732706B2 (en) Provision of virtual reality content
Fender et al. Velt: A Framework for Multi RGB-D Camera Systems
CN114327174A (en) Virtual reality scene display method and cursor three-dimensional display method and device
KR20200137594A (en) A mobile apparatus and a method for controlling the mobile apparatus
US11948257B2 (en) Systems and methods for augmented reality video generation
Shirazi Timeline visualization of omnidirectional videos
Havemann et al. 3D-Powerpoint-Towards a Design Tool for Digital Exhibitions of Cultural Artifacts.
Matysczok et al. Efficient creation of augmented reality content by using an intuitive authoring system
Lu et al. Interactive Augmented Reality Application Design based on Mobile Terminal
Roberts The AR/VR Technology Stack: A Central Repository of Software Development Libraries, Platforms, and Tools

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12736135

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 13981058

Country of ref document: US

122 Ep: pct application non-entry in european phase

Ref document number: 12736135

Country of ref document: EP

Kind code of ref document: A1